Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1650664646 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Apr 22 21:57:28.542: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 21:57:28.547: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 22 21:57:28.572: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 22 21:57:28.644: INFO: The status of Pod cmk-init-discover-node1-7s78z is Succeeded, skipping waiting
Apr 22 21:57:28.644: INFO: The status of Pod cmk-init-discover-node2-2m4dr is Succeeded, skipping waiting
Apr 22 21:57:28.644: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 22 21:57:28.644: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 22 21:57:28.644: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 22 21:57:28.663: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 22 21:57:28.663: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 22 21:57:28.663: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Apr 22 21:57:28.663: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Apr 22 21:57:28.663: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Apr 22 21:57:28.663: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Apr 22 21:57:28.663: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 22 21:57:28.663: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 22 21:57:28.663: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 22 21:57:28.663: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 22 21:57:28.663: INFO: e2e test version: v1.21.9
Apr 22 21:57:28.664: INFO: kube-apiserver version: v1.21.1
Apr 22 21:57:28.664: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 21:57:28.671: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Apr 22 21:57:28.683: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 21:57:28.703: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Apr 22 21:57:28.698: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 21:57:28.724: INFO: Cluster IP family: ipv4
Apr 22 21:57:28.696: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 21:57:28.724: INFO: Cluster IP family: ipv4
Apr 22 21:57:28.698: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 21:57:28.724: INFO: Cluster IP family: ipv4
Apr 22 21:57:28.704: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 21:57:28.724: INFO: Cluster IP family: ipv4
S
------------------------------
Apr 22 21:57:28.696: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 21:57:28.725: INFO: Cluster IP family: ipv4
Apr 22 21:57:28.701: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 21:57:28.726: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Apr 22 21:57:28.705: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 21:57:28.729: INFO: Cluster IP family: ipv4
Apr 22 21:57:28.708: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 21:57:28.730: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:28.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
W0422 21:57:28.785472 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 21:57:28.785: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 21:57:28.787: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Apr 22 21:57:28.807: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1109 7fb8baf4-d46e-4b15-9707-9292326ecbb4 31500 0 2022-04-22 21:57:28 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-04-22 21:57:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 22 21:57:28.808: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1109 7fb8baf4-d46e-4b15-9707-9292326ecbb4 31503 0 2022-04-22 21:57:28 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-04-22 21:57:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:28.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1109" for this suite.
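(Editor's note: a minimal client-go sketch of the watch-from-resourceVersion pattern the test above exercises. The namespace, ConfigMap name, and resourceVersion value are placeholders for illustration, not values taken from this run.)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Start the watch at a specific resourceVersion: the API server replays
	// every MODIFIED/DELETED notification that happened after that version.
	w, err := client.CoreV1().ConfigMaps("watch-1109").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
		ResourceVersion: "31500", // placeholder: use the version returned by an earlier update
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
```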
•SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:28.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
W0422 21:57:28.789944 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 21:57:28.790: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 21:57:28.792: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:28.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-589" for this suite.
•SS
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":1,"skipped":18,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:28.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
W0422 21:57:28.848763 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 21:57:28.849: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 21:57:28.851: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 22 21:57:28.858: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 22 21:57:30.892: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:31.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7305" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":1,"skipped":33,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:28.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
W0422 21:57:28.789645 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 21:57:28.790: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 21:57:28.791: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting a starting resourceVersion
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:34.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8650" for this suite.
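(Editor's note: the "[sig-node] Secrets should patch a secret" test earlier in this block boils down to a single Patch call. A hedged sketch with client-go; namespace, Secret name, label, and data are placeholders.)

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Strategic merge patch: add a label and replace one data key.
	// "dmFsdWU=" is base64 for "value"; Secret data is base64-encoded.
	patch := []byte(`{"metadata":{"labels":{"testsecret":"patched"}},"data":{"key":"dmFsdWU="}}`)
	_, err = client.CoreV1().Secrets("secrets-589").Patch(
		context.TODO(), "test-secret", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
```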
• [SLOW TEST:5.908 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:28.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
W0422 21:57:28.791077 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 21:57:28.791: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 21:57:28.793: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-c1dd49f3-5c4e-4b7a-aaeb-734accbcfe5b
STEP: Creating a pod to test consume configMaps
Apr 22 21:57:28.810: INFO: Waiting up to 5m0s for pod "pod-configmaps-a6e65eee-e0f2-43b7-930b-ccff5d909c78" in namespace "configmap-9896" to be "Succeeded or Failed"
Apr 22 21:57:28.812: INFO: Pod "pod-configmaps-a6e65eee-e0f2-43b7-930b-ccff5d909c78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339879ms
Apr 22 21:57:30.817: INFO: Pod "pod-configmaps-a6e65eee-e0f2-43b7-930b-ccff5d909c78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007007863s
Apr 22 21:57:32.823: INFO: Pod "pod-configmaps-a6e65eee-e0f2-43b7-930b-ccff5d909c78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012860866s
Apr 22 21:57:34.827: INFO: Pod "pod-configmaps-a6e65eee-e0f2-43b7-930b-ccff5d909c78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017345771s
Apr 22 21:57:36.831: INFO: Pod "pod-configmaps-a6e65eee-e0f2-43b7-930b-ccff5d909c78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021582119s
STEP: Saw pod success
Apr 22 21:57:36.831: INFO: Pod "pod-configmaps-a6e65eee-e0f2-43b7-930b-ccff5d909c78" satisfied condition "Succeeded or Failed"
Apr 22 21:57:36.834: INFO: Trying to get logs from node node2 pod pod-configmaps-a6e65eee-e0f2-43b7-930b-ccff5d909c78 container agnhost-container:
STEP: delete the pod
Apr 22 21:57:36.852: INFO: Waiting for pod pod-configmaps-a6e65eee-e0f2-43b7-930b-ccff5d909c78 to disappear
Apr 22 21:57:36.854: INFO: Pod pod-configmaps-a6e65eee-e0f2-43b7-930b-ccff5d909c78 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:36.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9896" for this suite.
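(Editor's note: the ConfigMap defaultMode test just above mounts a ConfigMap volume with an explicit file mode. A hedged sketch of the pod shape, assuming placeholder names and a busybox image rather than the agnhost image the test uses.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // owner read-only; applied to every file in the volume
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c '%a' /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
				},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						DefaultMode:          &mode,
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```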
• [SLOW TEST:8.100 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:28.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W0422 21:57:28.731747 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 21:57:28.732: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 21:57:28.735: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-0989e391-f604-4bc9-80f2-a800cc4b3ff5
STEP: Creating a pod to test consume configMaps
Apr 22 21:57:28.759: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fd50c871-875a-40ab-bea3-be351a9df37d" in namespace "projected-1225" to be "Succeeded or Failed"
Apr 22 21:57:28.765: INFO: Pod "pod-projected-configmaps-fd50c871-875a-40ab-bea3-be351a9df37d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.788116ms
Apr 22 21:57:30.768: INFO: Pod "pod-projected-configmaps-fd50c871-875a-40ab-bea3-be351a9df37d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009789474s
Apr 22 21:57:32.772: INFO: Pod "pod-projected-configmaps-fd50c871-875a-40ab-bea3-be351a9df37d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013390457s
Apr 22 21:57:34.779: INFO: Pod "pod-projected-configmaps-fd50c871-875a-40ab-bea3-be351a9df37d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020070158s
Apr 22 21:57:36.783: INFO: Pod "pod-projected-configmaps-fd50c871-875a-40ab-bea3-be351a9df37d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024409299s
Apr 22 21:57:38.787: INFO: Pod "pod-projected-configmaps-fd50c871-875a-40ab-bea3-be351a9df37d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.028828085s
STEP: Saw pod success
Apr 22 21:57:38.787: INFO: Pod "pod-projected-configmaps-fd50c871-875a-40ab-bea3-be351a9df37d" satisfied condition "Succeeded or Failed"
Apr 22 21:57:38.791: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-fd50c871-875a-40ab-bea3-be351a9df37d container agnhost-container:
STEP: delete the pod
Apr 22 21:57:38.819: INFO: Waiting for pod pod-projected-configmaps-fd50c871-875a-40ab-bea3-be351a9df37d to disappear
Apr 22 21:57:38.821: INFO: Pod pod-projected-configmaps-fd50c871-875a-40ab-bea3-be351a9df37d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:38.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1225" for this suite.
• [SLOW TEST:10.129 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:34.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 22 21:57:34.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86e0bd33-aa00-4263-9fde-8e6aaed1cedb" in namespace "downward-api-3469" to be "Succeeded or Failed"
Apr 22 21:57:34.808: INFO: Pod "downwardapi-volume-86e0bd33-aa00-4263-9fde-8e6aaed1cedb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187233ms
Apr 22 21:57:36.810: INFO: Pod "downwardapi-volume-86e0bd33-aa00-4263-9fde-8e6aaed1cedb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004749004s
Apr 22 21:57:38.813: INFO: Pod "downwardapi-volume-86e0bd33-aa00-4263-9fde-8e6aaed1cedb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007532351s
Apr 22 21:57:40.817: INFO: Pod "downwardapi-volume-86e0bd33-aa00-4263-9fde-8e6aaed1cedb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011432771s
STEP: Saw pod success
Apr 22 21:57:40.817: INFO: Pod "downwardapi-volume-86e0bd33-aa00-4263-9fde-8e6aaed1cedb" satisfied condition "Succeeded or Failed"
Apr 22 21:57:40.819: INFO: Trying to get logs from node node1 pod downwardapi-volume-86e0bd33-aa00-4263-9fde-8e6aaed1cedb container client-container:
STEP: delete the pod
Apr 22 21:57:40.848: INFO: Waiting for pod downwardapi-volume-86e0bd33-aa00-4263-9fde-8e6aaed1cedb to disappear
Apr 22 21:57:40.850: INFO: Pod downwardapi-volume-86e0bd33-aa00-4263-9fde-8e6aaed1cedb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:40.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3469" for this suite.
• [SLOW TEST:6.103 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":51,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:28.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W0422 21:57:28.872789 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 21:57:28.873: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 21:57:28.875: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-813ad0dd-49da-4573-96a7-eb9b4fc9fd66
STEP: Creating a pod to test consume secrets
Apr 22 21:57:28.891: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb" in namespace "projected-3316" to be "Succeeded or Failed"
Apr 22 21:57:28.893: INFO: Pod "pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287037ms
Apr 22 21:57:30.896: INFO: Pod "pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005376888s
Apr 22 21:57:32.899: INFO: Pod "pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00838276s
Apr 22 21:57:34.903: INFO: Pod "pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012226832s
Apr 22 21:57:36.907: INFO: Pod "pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015767387s
Apr 22 21:57:38.910: INFO: Pod "pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019056746s
Apr 22 21:57:40.916: INFO: Pod "pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.025095996s
STEP: Saw pod success
Apr 22 21:57:40.916: INFO: Pod "pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb" satisfied condition "Succeeded or Failed"
Apr 22 21:57:40.919: INFO: Trying to get logs from node node2 pod pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb container projected-secret-volume-test:
STEP: delete the pod
Apr 22 21:57:40.929: INFO: Waiting for pod pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb to disappear
Apr 22 21:57:40.931: INFO: Pod pod-projected-secrets-5a14ea3b-4554-41fe-a470-e65eb262a0bb no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:40.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3316" for this suite.
• [SLOW TEST:12.112 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":38,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:40.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:40.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-8416" for this suite.
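(Editor's note: the EndpointSlice test just above asserts that the control plane mirrors the "kubernetes" Service in EndpointSlices. A hedged sketch of how to list those slices with client-go, using the standard kubernetes.io/service-name label; the kubeconfig path is a placeholder.)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// EndpointSlices are labeled with the Service they belong to.
	slices, err := client.DiscoveryV1().EndpointSlices("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kubernetes"})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			fmt.Println(s.Name, ep.Addresses)
		}
	}
}
```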
•S
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":3,"skipped":72,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:31.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-5af8f69f-aab7-4d3b-abb8-b0e4fc48ac40
STEP: Creating a pod to test consume secrets
Apr 22 21:57:31.957: INFO: Waiting up to 5m0s for pod "pod-secrets-b8db46c3-23fc-4766-a385-c843069f3d0a" in namespace "secrets-330" to be "Succeeded or Failed"
Apr 22 21:57:31.959: INFO: Pod "pod-secrets-b8db46c3-23fc-4766-a385-c843069f3d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.924165ms
Apr 22 21:57:33.963: INFO: Pod "pod-secrets-b8db46c3-23fc-4766-a385-c843069f3d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005494648s
Apr 22 21:57:35.967: INFO: Pod "pod-secrets-b8db46c3-23fc-4766-a385-c843069f3d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009960497s
Apr 22 21:57:37.971: INFO: Pod "pod-secrets-b8db46c3-23fc-4766-a385-c843069f3d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013874945s
Apr 22 21:57:39.977: INFO: Pod "pod-secrets-b8db46c3-23fc-4766-a385-c843069f3d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019847828s
Apr 22 21:57:41.981: INFO: Pod "pod-secrets-b8db46c3-23fc-4766-a385-c843069f3d0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.023891117s
STEP: Saw pod success
Apr 22 21:57:41.981: INFO: Pod "pod-secrets-b8db46c3-23fc-4766-a385-c843069f3d0a" satisfied condition "Succeeded or Failed"
Apr 22 21:57:41.984: INFO: Trying to get logs from node node2 pod pod-secrets-b8db46c3-23fc-4766-a385-c843069f3d0a container secret-volume-test:
STEP: delete the pod
Apr 22 21:57:42.044: INFO: Waiting for pod pod-secrets-b8db46c3-23fc-4766-a385-c843069f3d0a to disappear
Apr 22 21:57:42.050: INFO: Pod pod-secrets-b8db46c3-23fc-4766-a385-c843069f3d0a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:42.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-330" for this suite.
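(Editor's note: a hedged sketch of the pod shape the multi-volume Secrets test just above checks: one Secret consumed through two distinct volumes. Names, image, and command are illustrative placeholders.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both volumes reference the same Secret by name.
	secretVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
		},
	}
	fmt.Println(pod.Name)
}
```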
• [SLOW TEST:10.138 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:28.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
W0422 21:57:28.813018 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 21:57:28.813: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 21:57:28.815: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 22 21:57:28.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 22 21:57:37.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6722 --namespace=crd-publish-openapi-6722 create -f -'
Apr 22 21:57:37.820: INFO: stderr: ""
Apr 22 21:57:37.820: INFO: stdout: "e2e-test-crd-publish-openapi-735-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 22 21:57:37.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6722 --namespace=crd-publish-openapi-6722 delete e2e-test-crd-publish-openapi-735-crds test-cr'
Apr 22 21:57:38.005: INFO: stderr: ""
Apr 22 21:57:38.005: INFO: stdout: "e2e-test-crd-publish-openapi-735-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Apr 22 21:57:38.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6722 --namespace=crd-publish-openapi-6722 apply -f -'
Apr 22 21:57:38.369: INFO: stderr: ""
Apr 22 21:57:38.369: INFO: stdout: "e2e-test-crd-publish-openapi-735-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 22 21:57:38.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6722 --namespace=crd-publish-openapi-6722 delete e2e-test-crd-publish-openapi-735-crds test-cr'
Apr 22 21:57:38.535: INFO: stderr: ""
Apr 22 21:57:38.536: INFO: stdout: "e2e-test-crd-publish-openapi-735-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Apr 22 21:57:38.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6722 explain e2e-test-crd-publish-openapi-735-crds'
Apr 22 21:57:38.895: INFO: stderr: ""
Apr 22 21:57:38.895: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-735-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:42.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6722" for this suite.
• [SLOW TEST:13.762 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":1,"skipped":20,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:28.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
W0422 21:57:28.807228 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 21:57:28.807: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 21:57:28.809: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 22 21:57:28.812: INFO: Creating ReplicaSet my-hostname-basic-4f488d58-bea9-4b1b-8404-88eb12f17683
Apr 22 21:57:28.818: INFO: Pod name my-hostname-basic-4f488d58-bea9-4b1b-8404-88eb12f17683: Found 0 pods out of 1
Apr 22 21:57:33.823: INFO: Pod name my-hostname-basic-4f488d58-bea9-4b1b-8404-88eb12f17683: Found 1 pods out of 1
Apr 22 21:57:33.823: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-4f488d58-bea9-4b1b-8404-88eb12f17683" is running
Apr 22 21:57:37.829: INFO: Pod "my-hostname-basic-4f488d58-bea9-4b1b-8404-88eb12f17683-2lfwq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 21:57:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 21:57:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4f488d58-bea9-4b1b-8404-88eb12f17683]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 21:57:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4f488d58-bea9-4b1b-8404-88eb12f17683]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 21:57:28 +0000 UTC Reason: Message:}])
Apr 22 21:57:37.830: INFO: Trying to dial the pod
Apr 22 21:57:42.838: INFO: Controller my-hostname-basic-4f488d58-bea9-4b1b-8404-88eb12f17683: Got expected result from replica 1 [my-hostname-basic-4f488d58-bea9-4b1b-8404-88eb12f17683-2lfwq]: "my-hostname-basic-4f488d58-bea9-4b1b-8404-88eb12f17683-2lfwq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:42.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-159" for this suite.
• [SLOW TEST:14.060 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:42.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if kubectl diff finds a difference for Deployments [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create deployment with httpd image
Apr 22 21:57:42.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6623 create -f -'
Apr 22 21:57:42.507: INFO: stderr: ""
Apr 22 21:57:42.507: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: verify diff finds difference between live and declared image
Apr 22 21:57:42.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6623 diff -f -'
Apr 22 21:57:42.856: INFO: rc: 1
Apr 22 21:57:42.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6623 delete -f -'
Apr 22 21:57:42.967: INFO: stderr: ""
Apr 22 21:57:42.967: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:42.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6623" for this suite.
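(Editor's note: the "rc: 1" in the kubectl diff test above is the expected outcome, not a failure: kubectl diff exits 0 when no differences are found, 1 when differences are found, and greater than 1 on a real error. A hedged sketch of driving it from Go; the manifest path and kubeconfig are placeholders.)

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "diff", "-f", "deployment.yaml")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit code 1 is kubectl diff's "differences were found" signal.
		if code := exitErr.ExitCode(); code == 1 {
			fmt.Printf("differences found:\n%s", out)
		} else {
			fmt.Printf("kubectl or diff failed (rc=%d):\n%s", code, out)
		}
		return
	} else if err != nil {
		panic(err) // e.g. kubectl binary not found
	}
	fmt.Println("no differences")
}
```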
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":3,"skipped":54,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:40.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-3ce08b1a-3f2f-436c-88b7-deab153f0722
STEP: Creating a pod to test consume configMaps
Apr 22 21:57:41.010: INFO: Waiting up to 5m0s for pod "pod-configmaps-a43aa223-aa9c-48af-b454-18f16a62ff56" in namespace "configmap-8039" to be "Succeeded or Failed"
Apr 22 21:57:41.012: INFO: Pod "pod-configmaps-a43aa223-aa9c-48af-b454-18f16a62ff56": Phase="Pending", Reason="", readiness=false. Elapsed: 1.751309ms
Apr 22 21:57:43.014: INFO: Pod "pod-configmaps-a43aa223-aa9c-48af-b454-18f16a62ff56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004172956s
Apr 22 21:57:45.018: INFO: Pod "pod-configmaps-a43aa223-aa9c-48af-b454-18f16a62ff56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007818072s
STEP: Saw pod success
Apr 22 21:57:45.018: INFO: Pod "pod-configmaps-a43aa223-aa9c-48af-b454-18f16a62ff56" satisfied condition "Succeeded or Failed"
Apr 22 21:57:45.020: INFO: Trying to get logs from node node1 pod pod-configmaps-a43aa223-aa9c-48af-b454-18f16a62ff56 container agnhost-container:
STEP: delete the pod
Apr 22 21:57:45.031: INFO: Waiting for pod pod-configmaps-a43aa223-aa9c-48af-b454-18f16a62ff56 to disappear
Apr 22 21:57:45.033: INFO: Pod pod-configmaps-a43aa223-aa9c-48af-b454-18f16a62ff56 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:45.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8039" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":80,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:42.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 22 21:57:45.641: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:45.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5474" for this suite.
•
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:28.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
W0422 21:57:28.823225 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 21:57:28.823: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 21:57:28.825: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:46.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6156" for this suite.
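(Editor's note: the Job test just above relies on restartPolicy OnFailure, so failed containers are restarted in place until the Job reaches its completions. A hedged sketch of such a Job; all names, the image, and the deliberately flaky command are placeholders, not the e2e suite's own task.)

```go
package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "flaky-job", Namespace: "default"},
		Spec: batchv1.JobSpec{
			Parallelism: int32Ptr(2),
			Completions: int32Ptr(4),
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure: the kubelet restarts the failed container locally
					// instead of the Job controller recreating the pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "worker",
						Image: "busybox",
						// Placeholder for a task that sometimes fails: exits
						// non-zero on odd seconds, zero on even seconds.
						Command: []string{"sh", "-c", "exit $(( $(date +%s) % 2 ))"},
					}},
				},
			},
		},
	}
	if _, err := client.BatchV1().Jobs("default").Create(context.TODO(), job, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```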
• [SLOW TEST:18.045 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:42.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should create a PodDisruptionBudget [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pdb
STEP: Waiting for the pdb to be processed
STEP: updating the pdb
STEP: Waiting for the pdb to be processed
STEP: patching the pdb
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:46.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-220" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":2,"skipped":27,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:38.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 22 21:57:44.939: INFO: Deleting pod "var-expansion-2e818a7c-97e8-4b18-b09a-0185c38511c9" in namespace "var-expansion-6527"
Apr 22 21:57:44.943: INFO: Wait up to 5m0s for pod "var-expansion-2e818a7c-97e8-4b18-b09a-0185c38511c9" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:48.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6527" for this suite.
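(Editor's note: the DisruptionController test earlier in this block creates, updates, patches, and deletes a PodDisruptionBudget. A hedged sketch of the create step with the policy/v1 API that went GA in the v1.21 cluster under test; namespace, name, selector, and minAvailable are placeholders.)

```go
package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	minAvailable := intstr.FromInt(1)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "example-pdb", Namespace: "default"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			// Keep at least one matching pod up during voluntary disruptions.
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "example"}},
		},
	}
	if _, err := client.PolicyV1().PodDisruptionBudgets("default").Create(
		context.TODO(), pdb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```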
• [SLOW TEST:10.059 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":2,"skipped":43,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:48.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:49.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8340" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":3,"skipped":50,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:45.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:49.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7349" for this suite.
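(Editor's note: the Sysctls test just above sets kernel.shm_rmid_forced through the pod security context; it is on the "safe" sysctl list, so no special kubelet configuration is needed. A hedged sketch of the pod shape; names and image are placeholders.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Sysctls are set at the pod level, before any container starts.
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sysctl kernel.shm_rmid_forced"},
			}},
		},
	}
	fmt.Println(pod.Spec.SecurityContext.Sysctls[0].Name)
}
```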
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":5,"skipped":92,"failed":0}
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:49.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should support creating EndpointSlice API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/discovery.k8s.io
STEP: getting /apis/discovery.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 22 21:57:49.160: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 22 21:57:49.163: INFO: starting watch
STEP: patching
STEP: updating
Apr 22 21:57:49.172: INFO: waiting for watch events with expected annotations
Apr 22 21:57:49.172: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:49.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-3171" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":6,"skipped":92,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:36.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating replication controller my-hostname-basic-294ff2a0-4933-45a7-8824-ad4c81f80df4
Apr 22 21:57:36.929: INFO: Pod name my-hostname-basic-294ff2a0-4933-45a7-8824-ad4c81f80df4: Found 0 pods out of 1
Apr 22 21:57:41.932: INFO: Pod name my-hostname-basic-294ff2a0-4933-45a7-8824-ad4c81f80df4: Found 1 pods out of 1
Apr 22 21:57:41.932: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-294ff2a0-4933-45a7-8824-ad4c81f80df4" are running
Apr 22 21:57:45.939: INFO: Pod "my-hostname-basic-294ff2a0-4933-45a7-8824-ad4c81f80df4-22qtx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 21:57:36 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 21:57:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-294ff2a0-4933-45a7-8824-ad4c81f80df4]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 21:57:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-294ff2a0-4933-45a7-8824-ad4c81f80df4]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 21:57:36 +0000 UTC Reason: Message:}])
Apr 22 21:57:45.940: INFO: Trying to dial the pod
Apr 22 21:57:50.951: INFO: Controller my-hostname-basic-294ff2a0-4933-45a7-8824-ad4c81f80df4: Got expected result from replica 1 [my-hostname-basic-294ff2a0-4933-45a7-8824-ad4c81f80df4-22qtx]: "my-hostname-basic-294ff2a0-4933-45a7-8824-ad4c81f80df4-22qtx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:50.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6051" for this suite.
• [SLOW TEST:14.058 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:40.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
Apr 22 21:57:51.050: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true)
Apr 22 21:57:51.115: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:51.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3911" for this suite.
• [SLOW TEST:10.155 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":2,"skipped":45,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 21:57:51.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149
[It] should support creating IngressClass API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 22 21:57:51.177: INFO: starting watch
STEP: patching
STEP: updating
Apr 22 21:57:51.184: INFO: waiting for watch events with expected annotations
Apr 22 21:57:51.184: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 21:57:51.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-6963" for this suite.
• ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":3,"skipped":52,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:46.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:57:47.243: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:57:49.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:57:51.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:57:53.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, 
loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:57:56.260: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:57:56.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3251" for this suite. STEP: Destroying namespace "webhook-3251-markers" for this suite. 
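The discovery-document walk above (fetch /apis, find the admissionregistration.k8s.io group, then its v1 resource list) can be reproduced with client-go's discovery client. A sketch under the same kubeconfig assumption; the printed names are whatever the server reports, which for v1 should include mutatingwebhookconfigurations and validatingwebhookconfigurations:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Fetch the /apis discovery document and locate the group.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "admissionregistration.k8s.io" {
			fmt.Println("preferred version:", g.PreferredVersion.GroupVersion)
		}
	}

	// Fetch the group/version document and list its resources.
	rl, err := cs.Discovery().ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		fmt.Println(r.Name)
	}
}
```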
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.385 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":2,"skipped":50,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:49.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod Apr 22 21:57:49.266: INFO: The status of Pod pod-hostip-b73b708d-cb90-41f6-b5e0-85f4d50ad248 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:51.269: INFO: The status of Pod pod-hostip-b73b708d-cb90-41f6-b5e0-85f4d50ad248 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:53.270: INFO: The status of Pod pod-hostip-b73b708d-cb90-41f6-b5e0-85f4d50ad248 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:55.270: INFO: The status of Pod pod-hostip-b73b708d-cb90-41f6-b5e0-85f4d50ad248 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:57.268: INFO: The status of Pod pod-hostip-b73b708d-cb90-41f6-b5e0-85f4d50ad248 is Running (Ready = true) Apr 22 21:57:57.273: INFO: Pod pod-hostip-b73b708d-cb90-41f6-b5e0-85f4d50ad248 has hostIP: 10.10.190.208 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:57:57.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-959" for this suite. 
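The host-IP check above boils down to polling the pod until it is Running and then reading Status.HostIP. A sketch of that loop; the pod name is hypothetical, the namespace matches the test's, and wait.PollImmediate is the same helper family the e2e framework builds on:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	var pod *corev1.Pod
	// Poll until the pod reports Running, mirroring the repeated
	// "is Pending, waiting for it to be Running" lines in the log.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods("pods-959").Get(ctx, "pod-hostip-example", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		pod = p
		return p.Status.Phase == corev1.PodRunning, nil
	})
	if err != nil {
		panic(err)
	}
	// The kubelet fills this in with the IP of the node running the pod.
	fmt.Println("hostIP:", pod.Status.HostIP)
}
```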
• [SLOW TEST:8.078 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":94,"failed":0} SSSS ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":36,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:45.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 21:57:45.681: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 22 21:57:53.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-670 --namespace=crd-publish-openapi-670 create -f -' Apr 22 21:57:54.299: INFO: stderr: "" Apr 22 21:57:54.299: INFO: stdout: "e2e-test-crd-publish-openapi-6846-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 22 21:57:54.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-670 --namespace=crd-publish-openapi-670 delete e2e-test-crd-publish-openapi-6846-crds test-cr' Apr 22 21:57:54.451: INFO: stderr: "" Apr 22 21:57:54.451: INFO: stdout: "e2e-test-crd-publish-openapi-6846-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 22 21:57:54.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-670 --namespace=crd-publish-openapi-670 apply -f -' Apr 22 21:57:54.860: INFO: stderr: "" Apr 22 21:57:54.861: INFO: stdout: "e2e-test-crd-publish-openapi-6846-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 22 21:57:54.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-670 --namespace=crd-publish-openapi-670 delete e2e-test-crd-publish-openapi-6846-crds test-cr' Apr 22 21:57:55.045: INFO: stderr: "" Apr 22 21:57:55.045: INFO: stdout: "e2e-test-crd-publish-openapi-6846-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 22 21:57:55.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-670 explain e2e-test-crd-publish-openapi-6846-crds' Apr 22 21:57:55.407: INFO: stderr: "" Apr 22 21:57:55.407: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6846-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:57:59.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-670" for this suite. • [SLOW TEST:13.951 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":3,"skipped":36,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:59.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 21:57:59.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-833 version' Apr 22 21:57:59.739: INFO: stderr: "" Apr 22 21:57:59.739: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.9\", GitCommit:\"b631974d68ac5045e076c86a5c66fba6f128dc72\", GitTreeState:\"clean\", BuildDate:\"2022-01-19T17:51:12Z\", GoVersion:\"go1.16.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:57:59.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-833" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:46.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 22 21:57:47.160: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 22 21:57:49.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261467, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:57:52.175: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 21:57:52.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:00.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8523" for this suite. 
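The "List CRs in v1 / List CRs in v2" steps above are ordinary list calls at two API versions; the server invokes the conversion webhook so that every stored object comes back at whichever version was requested. A sketch using the dynamic client; the group, resource, and namespace here are stand-ins, not the generated names the suite uses:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Same resource, listed at two versions: a non-homogeneous set of
	// stored objects is converted on the fly by the webhook.
	for _, version := range []string{"v1", "v2"} {
		gvr := schema.GroupVersionResource{Group: "stable.example.com", Version: version, Resource: "examples"}
		list, err := dyn.Resource(gvr).Namespace("default").List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, item := range list.Items {
			fmt.Println(version, item.GetName())
		}
	}
}
```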
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.421 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:00.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics Apr 22 21:58:06.545: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Apr 22 21:58:06.608: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:06.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6527" for this suite. 
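The deleteOptions behavior this garbage-collector test asserts is foreground propagation: the RC keeps existing (with a deletionTimestamp and the foregroundDeletion finalizer) until the collector has removed all of its pods. A minimal sketch of issuing such a delete; the RC name and namespace are illustrative:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Foreground propagation keeps the owner around until its dependents
	// are gone, which is exactly what "wait for the rc to be deleted"
	// observes above.
	fg := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("default").Delete(ctx, "simpletest.rc",
		metav1.DeleteOptions{PropagationPolicy: &fg})
	if err != nil {
		panic(err)
	}
}
```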
• [SLOW TEST:6.131 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":4,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:57.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 21:57:57.309: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Apr 22 21:57:57.323: INFO: The status of Pod pod-logs-websocket-19728768-964b-4c95-ad82-c836a327e630 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:59.326: INFO: The status of Pod pod-logs-websocket-19728768-964b-4c95-ad82-c836a327e630 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:01.328: INFO: The status of Pod pod-logs-websocket-19728768-964b-4c95-ad82-c836a327e630 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:03.327: INFO: The status of Pod pod-logs-websocket-19728768-964b-4c95-ad82-c836a327e630 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:05.328: INFO: The status of Pod pod-logs-websocket-19728768-964b-4c95-ad82-c836a327e630 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:07.327: INFO: The status of Pod pod-logs-websocket-19728768-964b-4c95-ad82-c836a327e630 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:07.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2508" for this suite. 
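The test above retrieves the logs over a websocket, an e2e-specific transport detail; day-to-day client code hits the same pods/{name}/log subresource through client-go's GetLogs, which uses plain HTTP streaming. A sketch, with the namespace from the test and a hypothetical pod name:

```go
package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Stream the container's logs once the pod is Running.
	req := cs.CoreV1().Pods("pods-2508").GetLogs("pod-logs-websocket-example", &corev1.PodLogOptions{})
	stream, err := req.Stream(ctx)
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	data, err := io.ReadAll(stream)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(data))
}
```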
• [SLOW TEST:10.063 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":98,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:59.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 21:57:59.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3587 create -f -' Apr 22 21:58:00.166: INFO: stderr: "" Apr 22 21:58:00.166: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Apr 22 21:58:00.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3587 create -f -' Apr 22 21:58:00.481: INFO: stderr: "" Apr 22 21:58:00.481: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Apr 22 21:58:01.484: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:58:01.484: INFO: Found 0 / 1 Apr 22 21:58:02.485: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:58:02.485: INFO: Found 0 / 1 Apr 22 21:58:03.484: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:58:03.484: INFO: Found 0 / 1 Apr 22 21:58:04.484: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:58:04.484: INFO: Found 0 / 1 Apr 22 21:58:05.484: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:58:05.484: INFO: Found 0 / 1 Apr 22 21:58:06.483: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:58:06.483: INFO: Found 1 / 1 Apr 22 21:58:06.483: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 22 21:58:06.486: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 21:58:06.486: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 22 21:58:06.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3587 describe pod agnhost-primary-fqc69' Apr 22 21:58:06.677: INFO: stderr: "" Apr 22 21:58:06.677: INFO: stdout: "Name: agnhost-primary-fqc69\nNamespace: kubectl-3587\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Fri, 22 Apr 2022 21:58:00 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.17\"\n ],\n \"mac\": \"6a:be:f3:fb:3f:f9\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.17\"\n ],\n \"mac\": \"6a:be:f3:fb:3f:f9\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.4.17\nIPs:\n IP: 10.244.4.17\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://9f95f24add6c1c757dfff15c8edf4fe5ad740cfbdd10141b213f03a901b1d9af\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 22 Apr 2022 21:58:04 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gsgdw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-gsgdw:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-3587/agnhost-primary-fqc69 to node2\n Normal Pulling 3s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 3s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 356.250343ms\n Normal Created 3s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Apr 22 21:58:06.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3587 describe rc agnhost-primary' Apr 22 21:58:06.886: INFO: stderr: "" Apr 22 21:58:06.886: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3587\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: agnhost-primary-fqc69\n" Apr 22 21:58:06.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3587 describe service agnhost-primary' Apr 22 21:58:07.064: INFO: stderr: "" Apr 22 
21:58:07.064: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3587\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.21.118\nIPs: 10.233.21.118\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.4.17:6379\nSession Affinity: None\nEvents: \n" Apr 22 21:58:07.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3587 describe node master1' Apr 22 21:58:07.288: INFO: stderr: "" Apr 22 21:58:07.288: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 22 Apr 2022 19:56:45 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Fri, 22 Apr 2022 21:58:04 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 22 Apr 2022 20:02:32 +0000 Fri, 22 Apr 2022 20:02:32 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Fri, 22 Apr 2022 21:57:57 +0000 Fri, 22 Apr 2022 19:56:42 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 22 Apr 2022 21:57:57 +0000 Fri, 22 Apr 2022 19:56:42 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 22 Apr 2022 21:57:57 +0000 Fri, 22 Apr 2022 19:56:42 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 22 Apr 2022 21:57:57 +0000 Fri, 22 Apr 2022 19:59:45 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 440625980Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518300Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 406080902496\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629468Ki\n pods: 110\nSystem Info:\n Machine ID: 025a90e4dec046189b065fcf68380be7\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: 7e907077-ed98-4d46-8305-29673eaf3bf3\n Kernel Version: 3.10.0-1160.62.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.14\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (10 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-7r6xc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 113m\n kube-system dns-autoscaler-7df78bfcfb-smkxp 20m (0%) 0 (0%) 10Mi (0%) 0 (0%) 117m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 111m\n 
kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 120m\n kube-system kube-flannel-6vhmq 150m (0%) 300m (0%) 64M (0%) 500M (0%) 118m\n kube-system kube-multus-ds-amd64-px448 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 118m\n kube-system kube-proxy-hfgsd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 119m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 101m\n monitoring node-exporter-b7qpl 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 104m\n monitoring prometheus-operator-585ccfb458-zsrdh 100m (0%) 200m (0%) 100Mi (0%) 200Mi (0%) 104m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1032m (1%) 870m (1%)\n memory 472100Ki (0%) 1034773760 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 22 21:58:07.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3587 describe namespace kubectl-3587' Apr 22 21:58:07.464: INFO: stderr: "" Apr 22 21:58:07.464: INFO: stdout: "Name: kubectl-3587\nLabels: e2e-framework=kubectl\n e2e-run=15f3ab23-f80d-4acc-bce3-7cc9e896190d\n kubernetes.io/metadata.name=kubectl-3587\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:07.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3587" for this suite. • [SLOW TEST:7.721 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:56.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 21:57:56.367: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b" in namespace "downward-api-1821" to be "Succeeded or Failed" Apr 22 21:57:56.369: INFO: Pod "downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.003148ms Apr 22 21:57:58.372: INFO: Pod "downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005329752s Apr 22 21:58:00.376: INFO: Pod "downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008642429s Apr 22 21:58:02.379: INFO: Pod "downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011498825s Apr 22 21:58:04.382: INFO: Pod "downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014961761s Apr 22 21:58:06.387: INFO: Pod "downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019586566s Apr 22 21:58:08.390: INFO: Pod "downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.023017186s STEP: Saw pod success Apr 22 21:58:08.390: INFO: Pod "downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b" satisfied condition "Succeeded or Failed" Apr 22 21:58:08.392: INFO: Trying to get logs from node node2 pod downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b container client-container: STEP: delete the pod Apr 22 21:58:08.402: INFO: Waiting for pod downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b to disappear Apr 22 21:58:08.404: INFO: Pod downwardapi-volume-b7022bd7-1836-4362-9924-5fa1deb4e08b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:08.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1821" for this suite. 
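The pod this test creates mounts a downwardAPI volume whose file points at limits.cpu; because the container sets no CPU limit, the kubelet substitutes the node's allocatable CPU, which is the value the test reads back from the container log. A sketch of an equivalent pod spec built with the corev1 types; the image and command are generic stand-ins for the suite's agnhost mounttest container:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podSpec builds a pod whose downwardAPI volume exposes limits.cpu. With no
// limit set on the container, the projected value falls back to the node's
// allocatable CPU.
func podSpec() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = podSpec() }
```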
• [SLOW TEST:12.088 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":60,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:42.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 22 21:57:43.003: INFO: >>> kubeConfig: /root/.kube/config Apr 22 21:57:51.680: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:11.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-569" for this suite. 
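What this CRD-publishing test checks is that both kinds from the same group/version land in the server's published OpenAPI document. One way to spot-check that by hand is to fetch /openapi/v2 raw and grep for the kinds; the sketch below assumes the endpoint answers in JSON by default, and the two kind names are placeholders for the randomly generated ones the suite creates:

```go
package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// The aggregated OpenAPI schema, including published CRD definitions,
	// is served at /openapi/v2.
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	for _, kind := range []string{"e2e-test-crd-publish-openapi-1234-crd", "e2e-test-crd-publish-openapi-5678-crd"} {
		fmt.Println(kind, strings.Contains(string(raw), kind))
	}
}
```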
• [SLOW TEST:28.937 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":4,"skipped":56,"failed":0} SSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:51.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-9p57z in namespace proxy-2527 I0422 21:57:51.253659 39 runners.go:190] Created replication controller with name: proxy-service-9p57z, namespace: proxy-2527, replica count: 1 I0422 21:57:52.303970 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:57:53.305015 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:57:54.306046 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:57:55.306771 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:57:56.307288 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:57:57.307830 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:57:58.309035 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:57:59.309711 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:58:00.310787 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:58:01.311494 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:58:02.312850 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:58:03.313282 39 runners.go:190] proxy-service-9p57z 
Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:58:04.313977 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:58:05.314812 39 runners.go:190] proxy-service-9p57z Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 21:58:05.317: INFO: setup took 14.071989238s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 22 21:58:05.319: INFO: (0) /api/v1/namespaces/proxy-2527/pods/proxy-service-9p57z-brxpg:1080/proxy/: test<... (200; 2.517974ms) Apr 22 21:58:05.320: INFO: (0) /api/v1/namespaces/proxy-2527/pods/http:proxy-service-9p57z-brxpg:1080/proxy/: ... (200; 2.882437ms) Apr 22 21:58:05.320: INFO: (0) /api/v1/namespaces/proxy-2527/pods/proxy-service-9p57z-brxpg:162/proxy/: bar (200; 2.832498ms) Apr 22 21:58:05.320: INFO: (0) /api/v1/namespaces/proxy-2527/pods/proxy-service-9p57z-brxpg/proxy/: test (200; 2.942729ms) Apr 22 21:58:05.322: INFO: (0) /api/v1/namespaces/proxy-2527/pods/http:proxy-service-9p57z-brxpg:162/proxy/: bar (200; 4.830133ms) Apr 22 21:58:05.322: INFO: (0) /api/v1/namespaces/proxy-2527/services/http:proxy-service-9p57z:portname2/proxy/: bar (200; 4.856078ms) Apr 22 21:58:05.322: INFO: (0) /api/v1/namespaces/proxy-2527/services/http:proxy-service-9p57z:portname1/proxy/: foo (200; 5.145925ms) Apr 22 21:58:05.322: INFO: (0) /api/v1/namespaces/proxy-2527/pods/proxy-service-9p57z-brxpg:160/proxy/: foo (200; 4.890587ms) Apr 22 21:58:05.322: INFO: (0) /api/v1/namespaces/proxy-2527/services/proxy-service-9p57z:portname1/proxy/: foo (200; 5.004482ms) Apr 22 21:58:05.322: INFO: (0) /api/v1/namespaces/proxy-2527/pods/http:proxy-service-9p57z-brxpg:160/proxy/: foo (200; 5.032045ms) Apr 22 21:58:05.322: INFO: (0) /api/v1/namespaces/proxy-2527/services/proxy-service-9p57z:portname2/proxy/: bar (200; 5.079196ms) Apr 22 21:58:05.325: INFO: (0) /api/v1/namespaces/proxy-2527/services/https:proxy-service-9p57z:tlsportname2/proxy/: tls qux (200; 8.677572ms) Apr 22 21:58:05.326: INFO: (0) /api/v1/namespaces/proxy-2527/pods/https:proxy-service-9p57z-brxpg:460/proxy/: tls baz (200; 8.749517ms) Apr 22 21:58:05.326: INFO: (0) /api/v1/namespaces/proxy-2527/pods/https:proxy-service-9p57z-brxpg:443/proxy/: test<... (200; 2.287414ms) Apr 22 21:58:05.328: INFO: (1) /api/v1/namespaces/proxy-2527/pods/https:proxy-service-9p57z-brxpg:443/proxy/: ... 
(200; 2.91943ms) Apr 22 21:58:05.329: INFO: (1) /api/v1/namespaces/proxy-2527/pods/http:proxy-service-9p57z-brxpg:160/proxy/: foo (200; 3.0768ms) Apr 22 21:58:05.329: INFO: (1) /api/v1/namespaces/proxy-2527/pods/proxy-service-9p57z-brxpg/proxy/: test (200; 3.175027ms) Apr 22 21:58:05.329: INFO: (1) /api/v1/namespaces/proxy-2527/services/http:proxy-service-9p57z:portname1/proxy/: foo (200; 3.496525ms) Apr 22 21:58:05.329: INFO: (1) /api/v1/namespaces/proxy-2527/services/https:proxy-service-9p57z:tlsportname2/proxy/: tls qux (200; 3.583499ms) Apr 22 21:58:05.330: INFO: (1) /api/v1/namespaces/proxy-2527/pods/https:proxy-service-9p57z-brxpg:462/proxy/: tls qux (200; 3.762062ms) Apr 22 21:58:05.330: INFO: (1) /api/v1/namespaces/proxy-2527/services/http:proxy-service-9p57z:portname2/proxy/: bar (200; 3.855629ms) Apr 22 21:58:05.330: INFO: (1) /api/v1/namespaces/proxy-2527/services/https:proxy-service-9p57z:tlsportname1/proxy/: tls baz (200; 4.019477ms) Apr 22 21:58:05.330: INFO: (1) /api/v1/namespaces/proxy-2527/services/proxy-service-9p57z:portname1/proxy/: foo (200; 4.11367ms) Apr 22 21:58:05.330: INFO: (1) /api/v1/namespaces/proxy-2527/services/proxy-service-9p57z:portname2/proxy/: bar (200; 4.43277ms) Apr 22 21:58:05.332: INFO: (2) /api/v1/namespaces/proxy-2527/pods/https:proxy-service-9p57z-brxpg:460/proxy/: tls baz (200; 2.003581ms) Apr 22 21:58:05.333: INFO: (2) /api/v1/namespaces/proxy-2527/pods/https:proxy-service-9p57z-brxpg:462/proxy/: tls qux (200; 2.282761ms) Apr 22 21:58:05.333: INFO: (2) /api/v1/namespaces/proxy-2527/pods/http:proxy-service-9p57z-brxpg:160/proxy/: foo (200; 2.44284ms) Apr 22 21:58:05.333: INFO: (2) /api/v1/namespaces/proxy-2527/pods/proxy-service-9p57z-brxpg:160/proxy/: foo (200; 2.772297ms) Apr 22 21:58:05.334: INFO: (2) /api/v1/namespaces/proxy-2527/pods/proxy-service-9p57z-brxpg:162/proxy/: bar (200; 3.017727ms) Apr 22 21:58:05.334: INFO: (2) /api/v1/namespaces/proxy-2527/pods/proxy-service-9p57z-brxpg:1080/proxy/: test<... (200; 3.213168ms) Apr 22 21:58:05.334: INFO: (2) /api/v1/namespaces/proxy-2527/pods/http:proxy-service-9p57z-brxpg:1080/proxy/: ... (200; 3.137848ms) Apr 22 21:58:05.334: INFO: (2) /api/v1/namespaces/proxy-2527/pods/proxy-service-9p57z-brxpg/proxy/: test (200; 3.238621ms) Apr 22 21:58:05.334: INFO: (2) /api/v1/namespaces/proxy-2527/services/http:proxy-service-9p57z:portname1/proxy/: foo (200; 3.326665ms) Apr 22 21:58:05.334: INFO: (2) /api/v1/namespaces/proxy-2527/pods/https:proxy-service-9p57z-brxpg:443/proxy/: ... (200; 2.202562ms) Apr 22 21:58:05.337: INFO: (3) /api/v1/namespaces/proxy-2527/pods/proxy-service-9p57z-brxpg:1080/proxy/: test<... 
(200; 2.254358ms)
[Proxy probe iterations (3) through (19), Apr 22 21:58:05.337 to 21:58:05.407: each iteration repeated the same sixteen requests through the apiserver proxy in namespace proxy-2527 -- pod proxy-service-9p57z-brxpg with and without an explicit port (160 "foo", 162 "bar", 1080, and the bare pod URL "test"), its https ports 443, 460 ("tls baz"), and 462 ("tls qux"), and service proxy-service-9p57z via named ports portname1 ("foo"), portname2 ("bar"), tlsportname1 ("tls baz"), and tlsportname2 ("tls qux"), each in plain, http:, and https: scheme variants. Every request returned 200 with latencies between roughly 1.7ms and 4.5ms; the HTML response bodies were mangled during log extraction and are elided here, and the record breaks off partway through iteration (19), so this spec's closing summary is lost.]
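For reference, every probe in the run condensed above goes through the apiserver's proxy subresource, so the same endpoints can be exercised by hand with kubectl's raw API access. A minimal sketch, assuming kubectl is pointed at a comparable cluster while the namespace still exists (paths taken from the log):

    # Pod proxy: an optional http:/https: prefix on the pod name selects the scheme,
    # and the suffix after the colon is the target port.
    kubectl get --raw "/api/v1/namespaces/proxy-2527/pods/proxy-service-9p57z-brxpg:160/proxy/"
    kubectl get --raw "/api/v1/namespaces/proxy-2527/pods/https:proxy-service-9p57z-brxpg:462/proxy/"
    # Service proxy: the suffix after the colon is a named service port.
    kubectl get --raw "/api/v1/namespaces/proxy-2527/services/proxy-service-9p57z:portname1/proxy/"
    kubectl get --raw "/api/v1/namespaces/proxy-2527/services/https:proxy-service-9p57z:tlsportname2/proxy/"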
------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 21:58:06.676: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Apr 22 21:58:06.692: INFO: The status of Pod pod-exec-websocket-9335814c-54d4-49cf-bc4b-ecbe4d594ac3 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:08.694: INFO: The status of Pod pod-exec-websocket-9335814c-54d4-49cf-bc4b-ecbe4d594ac3 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:10.696: INFO: The status of Pod pod-exec-websocket-9335814c-54d4-49cf-bc4b-ecbe4d594ac3 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:12.695: INFO: The status of Pod pod-exec-websocket-9335814c-54d4-49cf-bc4b-ecbe4d594ac3 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:13.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2833" for this suite.
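The spec above drives the pod's exec subresource over a WebSocket connection. Outside the e2e framework, the simplest way to reach the same subresource is kubectl exec, which negotiates a streaming protocol with the apiserver on the caller's behalf; a minimal sketch with hypothetical names:

    kubectl run exec-demo --image=busybox --restart=Never -- sleep 3600
    kubectl wait --for=condition=Ready pod/exec-demo --timeout=2m
    kubectl exec exec-demo -- echo remote-command-ok   # command runs inside the container
    kubectl delete pod exec-demo --now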
• [SLOW TEST:6.358 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":107,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:07.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Apr 22 21:58:07.401: INFO: Waiting up to 5m0s for pod "security-context-c2e25f04-432a-4587-ab23-914c845499d6" in namespace "security-context-8760" to be "Succeeded or Failed" Apr 22 21:58:07.403: INFO: Pod "security-context-c2e25f04-432a-4587-ab23-914c845499d6": Phase="Pending", Reason="", readiness=false. Elapsed: 1.843992ms Apr 22 21:58:09.407: INFO: Pod "security-context-c2e25f04-432a-4587-ab23-914c845499d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005807696s Apr 22 21:58:11.410: INFO: Pod "security-context-c2e25f04-432a-4587-ab23-914c845499d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009269677s Apr 22 21:58:13.416: INFO: Pod "security-context-c2e25f04-432a-4587-ab23-914c845499d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014393379s Apr 22 21:58:15.419: INFO: Pod "security-context-c2e25f04-432a-4587-ab23-914c845499d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017782246s STEP: Saw pod success Apr 22 21:58:15.419: INFO: Pod "security-context-c2e25f04-432a-4587-ab23-914c845499d6" satisfied condition "Succeeded or Failed" Apr 22 21:58:15.421: INFO: Trying to get logs from node node1 pod security-context-c2e25f04-432a-4587-ab23-914c845499d6 container test-container: STEP: delete the pod Apr 22 21:58:15.433: INFO: Waiting for pod security-context-c2e25f04-432a-4587-ab23-914c845499d6 to disappear Apr 22 21:58:15.435: INFO: Pod security-context-c2e25f04-432a-4587-ab23-914c845499d6 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:15.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8760" for this suite. 
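The RunAsUser/RunAsGroup check that just finished can be reproduced by hand: set both fields in the pod-level securityContext and read the UID/GID back from the container, the same way the spec inspects the container log. A minimal sketch with hypothetical names (kubectl wait --for=jsonpath needs kubectl v1.23 or newer):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: runas-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001
        runAsGroup: 2002
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "id -u; id -g"]
    EOF
    kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/runas-demo --timeout=5m
    kubectl logs runas-demo   # expect 1001, then 2002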
• [SLOW TEST:8.076 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":104,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:11.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-a448b38e-9059-4dd0-856f-13f4ade446f8 STEP: Creating secret with name secret-projected-all-test-volume-640cec77-7e1f-41c0-8c2d-ed7e46e92c6e STEP: Creating a pod to test Check all projections for projected volume plugin Apr 22 21:58:11.973: INFO: Waiting up to 5m0s for pod "projected-volume-4b329144-6081-4c87-8817-712d5eff2bf2" in namespace "projected-1132" to be "Succeeded or Failed" Apr 22 21:58:11.975: INFO: Pod "projected-volume-4b329144-6081-4c87-8817-712d5eff2bf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15452ms Apr 22 21:58:13.979: INFO: Pod "projected-volume-4b329144-6081-4c87-8817-712d5eff2bf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005892938s Apr 22 21:58:15.983: INFO: Pod "projected-volume-4b329144-6081-4c87-8817-712d5eff2bf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009971495s STEP: Saw pod success Apr 22 21:58:15.983: INFO: Pod "projected-volume-4b329144-6081-4c87-8817-712d5eff2bf2" satisfied condition "Succeeded or Failed" Apr 22 21:58:15.986: INFO: Trying to get logs from node node1 pod projected-volume-4b329144-6081-4c87-8817-712d5eff2bf2 container projected-all-volume-test: STEP: delete the pod Apr 22 21:58:15.998: INFO: Waiting for pod projected-volume-4b329144-6081-4c87-8817-712d5eff2bf2 to disappear Apr 22 21:58:16.001: INFO: Pod projected-volume-4b329144-6081-4c87-8817-712d5eff2bf2 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:16.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1132" for this suite. 
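The "Projected combined" spec above mounts a configMap, a secret, and downward API fields through one projected volume and verifies all three surface in the container. A minimal sketch of the same shape, with hypothetical names:

    kubectl create configmap demo-cm --from-literal=data=from-configmap
    kubectl create secret generic demo-secret --from-literal=data=from-secret
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-all-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-all-volume-test
        image: busybox
        command: ["sh", "-c", "cat /all/cm /all/secret /all/podname"]
        volumeMounts:
        - name: all-in-one
          mountPath: /all
      volumes:
      - name: all-in-one
        projected:
          sources:
          - configMap:
              name: demo-cm
              items: [{key: data, path: cm}]
          - secret:
              name: demo-secret
              items: [{key: data, path: secret}]
          - downwardAPI:
              items:
              - path: podname
                fieldRef: {fieldPath: metadata.name}
    EOF
    kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/projected-all-demo --timeout=5m
    kubectl logs projected-all-demo   # one line per projected source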
• ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":63,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:08.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 21:58:08.456: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10af5959-eee7-484c-a1ae-5b56f791d692" in namespace "projected-234" to be "Succeeded or Failed" Apr 22 21:58:08.458: INFO: Pod "downwardapi-volume-10af5959-eee7-484c-a1ae-5b56f791d692": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233496ms Apr 22 21:58:10.461: INFO: Pod "downwardapi-volume-10af5959-eee7-484c-a1ae-5b56f791d692": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005002209s Apr 22 21:58:12.464: INFO: Pod "downwardapi-volume-10af5959-eee7-484c-a1ae-5b56f791d692": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007981s Apr 22 21:58:14.469: INFO: Pod "downwardapi-volume-10af5959-eee7-484c-a1ae-5b56f791d692": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013236558s Apr 22 21:58:16.474: INFO: Pod "downwardapi-volume-10af5959-eee7-484c-a1ae-5b56f791d692": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017761534s STEP: Saw pod success Apr 22 21:58:16.474: INFO: Pod "downwardapi-volume-10af5959-eee7-484c-a1ae-5b56f791d692" satisfied condition "Succeeded or Failed" Apr 22 21:58:16.476: INFO: Trying to get logs from node node2 pod downwardapi-volume-10af5959-eee7-484c-a1ae-5b56f791d692 container client-container: STEP: delete the pod Apr 22 21:58:16.488: INFO: Waiting for pod downwardapi-volume-10af5959-eee7-484c-a1ae-5b56f791d692 to disappear Apr 22 21:58:16.491: INFO: Pod downwardapi-volume-10af5959-eee7-484c-a1ae-5b56f791d692 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:16.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-234" for this suite. 
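The DefaultMode assertion above concerns the permission bits Kubernetes applies to projected files when no per-item mode is given. A minimal sketch with hypothetical names (the e2e test image prints the mode itself; plain stat stands in for it here):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          defaultMode: 0400
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef: {fieldPath: metadata.name}
    EOF
    kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/downward-mode-demo --timeout=5m
    kubectl logs downward-mode-demo   # expect 400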
• [SLOW TEST:8.073 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":65,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:07.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Apr 22 21:58:07.507: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 22 21:58:12.510: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:16.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3776" for this suite. 
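The scale flow above (get the scale subresource, update it, then patch it) maps onto the /scale endpoint of the ReplicaSet. A minimal sketch using the names from the log; the namespace is gone once the suite tears down, so treat these as illustrative, and note that --subresource on kubectl patch needs kubectl v1.24 or newer:

    # Read the scale subresource directly.
    kubectl get --raw /apis/apps/v1/namespaces/replicaset-3776/replicasets/test-rs/scale
    # Update desired replicas through the same subresource.
    kubectl scale replicaset test-rs -n replicaset-3776 --replicas=2
    # Or patch the subresource, as the spec's final step does.
    kubectl patch replicaset test-rs -n replicaset-3776 --subresource=scale \
      --type=merge -p '{"spec":{"replicas":3}}'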
• [SLOW TEST:9.059 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":6,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:49.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 22 21:57:49.136: INFO: >>> kubeConfig: /root/.kube/config Apr 22 21:57:57.803: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:16.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1585" for this suite. • [SLOW TEST:27.465 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":4,"skipped":91,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:16.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 21:58:16.595: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-bb3f367b-f92a-441b-bf59-4dac5223bf90" in namespace "security-context-test-5528" to be "Succeeded or Failed" Apr 22 21:58:16.597: INFO: Pod "busybox-readonly-false-bb3f367b-f92a-441b-bf59-4dac5223bf90": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.106281ms Apr 22 21:58:18.601: INFO: Pod "busybox-readonly-false-bb3f367b-f92a-441b-bf59-4dac5223bf90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005519442s Apr 22 21:58:20.606: INFO: Pod "busybox-readonly-false-bb3f367b-f92a-441b-bf59-4dac5223bf90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010995387s Apr 22 21:58:20.606: INFO: Pod "busybox-readonly-false-bb3f367b-f92a-441b-bf59-4dac5223bf90" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:20.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5528" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:15.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-384eb545-3f2d-4d7d-8ab3-009c59b87863 STEP: Creating a pod to test consume configMaps Apr 22 21:58:15.480: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a714c726-e715-4ae7-8cef-81e9c6292f29" in namespace "projected-2022" to be "Succeeded or Failed" Apr 22 21:58:15.483: INFO: Pod "pod-projected-configmaps-a714c726-e715-4ae7-8cef-81e9c6292f29": Phase="Pending", Reason="", readiness=false. Elapsed: 3.095422ms Apr 22 21:58:17.487: INFO: Pod "pod-projected-configmaps-a714c726-e715-4ae7-8cef-81e9c6292f29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00657322s Apr 22 21:58:19.490: INFO: Pod "pod-projected-configmaps-a714c726-e715-4ae7-8cef-81e9c6292f29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009932536s Apr 22 21:58:21.495: INFO: Pod "pod-projected-configmaps-a714c726-e715-4ae7-8cef-81e9c6292f29": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014462033s STEP: Saw pod success Apr 22 21:58:21.495: INFO: Pod "pod-projected-configmaps-a714c726-e715-4ae7-8cef-81e9c6292f29" satisfied condition "Succeeded or Failed" Apr 22 21:58:21.497: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-a714c726-e715-4ae7-8cef-81e9c6292f29 container agnhost-container: STEP: delete the pod Apr 22 21:58:21.512: INFO: Waiting for pod pod-projected-configmaps-a714c726-e715-4ae7-8cef-81e9c6292f29 to disappear Apr 22 21:58:21.514: INFO: Pod pod-projected-configmaps-a714c726-e715-4ae7-8cef-81e9c6292f29 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:21.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2022" for this suite. • [SLOW TEST:6.073 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":106,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:21.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0422 21:58:21.577300 34 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Apr 22 21:58:21.586: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Apr 22 21:58:21.589: INFO: starting watch STEP: patching STEP: updating Apr 22 21:58:21.606: INFO: waiting for watch events with expected annotations Apr 22 21:58:21.606: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:21.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-9555" for this suite. 
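The CronJob API walk above (create, get, list, watch, patch, update, the /status subresource, delete) can be retraced with kubectl against batch/v1, the non-deprecated group named in the warning. A minimal sketch with a hypothetical schedule:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: cronjob-demo
    spec:
      schedule: "*/5 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: main
                image: busybox
                command: ["sh", "-c", "date"]
    EOF
    kubectl get cronjob cronjob-demo
    kubectl patch cronjob cronjob-demo --type=merge \
      -p '{"metadata":{"annotations":{"patched":"true"}}}'
    kubectl delete cronjob cronjob-demo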
• ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":11,"skipped":121,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:16.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-ca1c2a02-208a-497f-acff-0dbbd495b4bc STEP: Creating a pod to test consume secrets Apr 22 21:58:16.053: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4c611773-38e8-4ccf-9682-ce553fc35e03" in namespace "projected-7045" to be "Succeeded or Failed" Apr 22 21:58:16.055: INFO: Pod "pod-projected-secrets-4c611773-38e8-4ccf-9682-ce553fc35e03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035329ms Apr 22 21:58:18.057: INFO: Pod "pod-projected-secrets-4c611773-38e8-4ccf-9682-ce553fc35e03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004824191s Apr 22 21:58:20.064: INFO: Pod "pod-projected-secrets-4c611773-38e8-4ccf-9682-ce553fc35e03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011847557s Apr 22 21:58:22.068: INFO: Pod "pod-projected-secrets-4c611773-38e8-4ccf-9682-ce553fc35e03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015285848s STEP: Saw pod success Apr 22 21:58:22.068: INFO: Pod "pod-projected-secrets-4c611773-38e8-4ccf-9682-ce553fc35e03" satisfied condition "Succeeded or Failed" Apr 22 21:58:22.071: INFO: Trying to get logs from node node1 pod pod-projected-secrets-4c611773-38e8-4ccf-9682-ce553fc35e03 container projected-secret-volume-test: STEP: delete the pod Apr 22 21:58:22.084: INFO: Waiting for pod pod-projected-secrets-4c611773-38e8-4ccf-9682-ce553fc35e03 to disappear Apr 22 21:58:22.086: INFO: Pod pod-projected-secrets-4c611773-38e8-4ccf-9682-ce553fc35e03 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:22.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7045" for this suite. 
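The projected-secret spec above checks the same defaultMode behavior for secret sources. A minimal sketch, hypothetical names:

    kubectl create secret generic mode-demo-secret --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        projected:
          defaultMode: 0400
          sources:
          - secret:
              name: mode-demo-secret
    EOF
    kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/secret-mode-demo --timeout=5m
    kubectl logs secret-mode-demo   # expect 400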
• [SLOW TEST:6.074 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":67,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:22.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:22.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8463" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":7,"skipped":83,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:16.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Apr 22 21:58:16.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-787 create -f -' Apr 22 21:58:16.993: INFO: stderr: "" Apr 22 21:58:16.993: INFO: stdout: "pod/pause created\n" Apr 22 21:58:16.993: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 22 21:58:16.993: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-787" to be "running and ready" Apr 22 21:58:16.995: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233902ms Apr 22 21:58:18.999: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005416524s Apr 22 21:58:21.003: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.010098144s Apr 22 21:58:21.003: INFO: Pod "pause" satisfied condition "running and ready" Apr 22 21:58:21.003: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Apr 22 21:58:21.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-787 label pods pause testing-label=testing-label-value' Apr 22 21:58:21.188: INFO: stderr: "" Apr 22 21:58:21.188: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 22 21:58:21.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-787 get pod pause -L testing-label' Apr 22 21:58:21.376: INFO: stderr: "" Apr 22 21:58:21.376: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 22 21:58:21.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-787 label pods pause testing-label-' Apr 22 21:58:21.564: INFO: stderr: "" Apr 22 21:58:21.564: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 22 21:58:21.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-787 get pod pause -L testing-label' Apr 22 21:58:21.723: INFO: stderr: "" Apr 22 21:58:21.723: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Apr 22 21:58:21.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-787 delete --grace-period=0 --force -f -' Apr 22 21:58:21.860: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 21:58:21.860: INFO: stdout: "pod \"pause\" force deleted\n" Apr 22 21:58:21.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-787 get rc,svc -l name=pause --no-headers' Apr 22 21:58:22.056: INFO: stderr: "No resources found in kubectl-787 namespace.\n" Apr 22 21:58:22.056: INFO: stdout: "" Apr 22 21:58:22.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-787 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 22 21:58:22.222: INFO: stderr: "" Apr 22 21:58:22.222: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:22.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-787" for this suite. 
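The label cycle above uses kubectl's key=value form to add a label and the trailing-dash form to remove it. The same removal can be expressed as a merge patch, where a null value deletes the key; a short sketch against the pause pod from the log:

    kubectl label pod pause testing-label=testing-label-value
    kubectl label pod pause testing-label-    # trailing dash removes the label
    # Equivalent removal as a JSON merge patch.
    kubectl patch pod pause --type=merge -p '{"metadata":{"labels":{"testing-label":null}}}'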
• [SLOW TEST:5.667 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":7,"skipped":51,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:16.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 21:58:16.629: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae4f465f-38df-4c9c-a0b2-6d925a56e6b7" in namespace "projected-1940" to be "Succeeded or Failed" Apr 22 21:58:16.632: INFO: Pod "downwardapi-volume-ae4f465f-38df-4c9c-a0b2-6d925a56e6b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.849749ms Apr 22 21:58:18.636: INFO: Pod "downwardapi-volume-ae4f465f-38df-4c9c-a0b2-6d925a56e6b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006340639s Apr 22 21:58:20.639: INFO: Pod "downwardapi-volume-ae4f465f-38df-4c9c-a0b2-6d925a56e6b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010217603s Apr 22 21:58:22.643: INFO: Pod "downwardapi-volume-ae4f465f-38df-4c9c-a0b2-6d925a56e6b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013390098s STEP: Saw pod success Apr 22 21:58:22.643: INFO: Pod "downwardapi-volume-ae4f465f-38df-4c9c-a0b2-6d925a56e6b7" satisfied condition "Succeeded or Failed" Apr 22 21:58:22.645: INFO: Trying to get logs from node node1 pod downwardapi-volume-ae4f465f-38df-4c9c-a0b2-6d925a56e6b7 container client-container: STEP: delete the pod Apr 22 21:58:22.656: INFO: Waiting for pod downwardapi-volume-ae4f465f-38df-4c9c-a0b2-6d925a56e6b7 to disappear Apr 22 21:58:22.658: INFO: Pod downwardapi-volume-ae4f465f-38df-4c9c-a0b2-6d925a56e6b7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:22.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1940" for this suite. 
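The "mode on item file" spec differs from the DefaultMode one earlier only in where the bits are set: per item rather than volume-wide. A minimal sketch of just that difference, reusing the scaffolding shown in the DefaultMode example (hypothetical names):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: item-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                mode: 0400
                fieldRef: {fieldPath: metadata.name}
    EOF
    kubectl logs item-mode-demo   # once the pod succeeds, expect 400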
• [SLOW TEST:6.065 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":100,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:13.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:58:13.264: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:58:15.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:58:17.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:58:19.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261493, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:58:22.285: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:23.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5580" for this suite. STEP: Destroying namespace "webhook-5580-markers" for this suite. 
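------------------------------
[Editor's note] The admission test above flips the CREATE operation out of and back into a ValidatingWebhookConfiguration's rules, checking that configmap creation is rejected only while the rule covers CREATE. Assuming a configuration named demo-validating-webhook already exists (the name is illustrative), the same rule change can be made with a JSON patch:

kubectl patch validatingwebhookconfiguration demo-validating-webhook --type='json' \
  -p='[{"op": "replace", "path": "/webhooks/0/rules/0/operations", "value": ["UPDATE"]}]'            # drop CREATE
kubectl patch validatingwebhookconfiguration demo-validating-webhook --type='json' \
  -p='[{"op": "replace", "path": "/webhooks/0/rules/0/operations", "value": ["CREATE", "UPDATE"]}]'  # restore it
------------------------------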
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.306 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":6,"skipped":129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:22.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 21:58:22.735: INFO: The status of Pod busybox-scheduling-0f308b24-cf18-410c-bc92-d4f2970ba416 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:24.739: INFO: The status of Pod busybox-scheduling-0f308b24-cf18-410c-bc92-d4f2970ba416 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:26.739: INFO: The status of Pod busybox-scheduling-0f308b24-cf18-410c-bc92-d4f2970ba416 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:28.739: INFO: The status of Pod busybox-scheduling-0f308b24-cf18-410c-bc92-d4f2970ba416 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:28.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9412" for this suite. 
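------------------------------
[Editor's note] The Kubelet test above only verifies that a container's stdout ends up in its logs. Reproduced by hand, with a pod name and message of our own choosing:

kubectl run busybox-logs-demo --image=busybox --restart=Never --command -- sh -c 'echo hello from busybox'
# once the pod reports Succeeded:
kubectl logs busybox-logs-demo   # prints: hello from busybox
------------------------------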
• [SLOW TEST:6.058 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:21.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Apr 22 21:58:23.720: INFO: running pods: 0 < 3 Apr 22 21:58:25.723: INFO: running pods: 0 < 3 Apr 22 21:58:27.728: INFO: running pods: 0 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:29.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-4691" for this suite. 
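------------------------------
[Editor's note] The DisruptionController test above creates a PodDisruptionBudget, starts matching pods, and waits for the PDB status to catch up. A hand-rolled equivalent, with an illustrative name and selector:

kubectl create poddisruptionbudget demo-pdb --selector=app=demo --min-available=2
# once pods labelled app=demo are running, the controller fills in the status fields:
kubectl get pdb demo-pdb -o jsonpath='{.status.currentHealthy}/{.status.expectedPods}{"\n"}'
------------------------------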
• [SLOW TEST:8.076 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":12,"skipped":130,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:22.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Apr 22 21:58:22.478: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:58:22.491: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:58:24.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261502, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261502, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261502, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261502, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:58:26.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261502, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261502, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261502, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261502, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: 
Verifying the service has paired with the endpoint Apr 22 21:58:29.511: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:30.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2970" for this suite. STEP: Destroying namespace "webhook-2970-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.353 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":8,"skipped":121,"failed":0} SSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0} [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:50.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics Apr 22 21:58:31.030: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Apr 22 21:58:31.092: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For 
namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Apr 22 21:58:31.093: INFO: Deleting pod "simpletest.rc-7j864" in namespace "gc-4540" Apr 22 21:58:31.100: INFO: Deleting pod "simpletest.rc-7pxj7" in namespace "gc-4540" Apr 22 21:58:31.106: INFO: Deleting pod "simpletest.rc-brmbj" in namespace "gc-4540" Apr 22 21:58:31.122: INFO: Deleting pod "simpletest.rc-ch47z" in namespace "gc-4540" Apr 22 21:58:31.127: INFO: Deleting pod "simpletest.rc-hh674" in namespace "gc-4540" Apr 22 21:58:31.133: INFO: Deleting pod "simpletest.rc-kszrc" in namespace "gc-4540" Apr 22 21:58:31.138: INFO: Deleting pod "simpletest.rc-nrg2g" in namespace "gc-4540" Apr 22 21:58:31.143: INFO: Deleting pod "simpletest.rc-sqb4z" in namespace "gc-4540" Apr 22 21:58:31.150: INFO: Deleting pod "simpletest.rc-sqkvb" in namespace "gc-4540" Apr 22 21:58:31.156: INFO: Deleting pod "simpletest.rc-xtrfp" in namespace "gc-4540" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:31.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4540" for this suite. • [SLOW TEST:40.209 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:31.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:31.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3313" for this suite. 
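------------------------------
[Editor's note] The Secrets test above sets the `immutable` field and expects later writes to be refused. A sketch with illustrative names:

kubectl create secret generic demo-immutable --from-literal=key=value
kubectl patch secret demo-immutable -p '{"immutable": true}'
# any further change to the data is now rejected by the API server:
kubectl patch secret demo-immutable -p '{"stringData": {"key": "new-value"}}'   # rejected: data is immutable once the field is set
------------------------------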
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":4,"skipped":40,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:28.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 22 21:58:34.870: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:34.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3675" for this suite. • [SLOW TEST:6.070 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":141,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:34.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 
21:58:34.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1733" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":8,"skipped":142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:31.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 21:58:31.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3548bd4-3e20-496f-a9cd-43b3d33b1e14" in namespace "projected-4975" to be "Succeeded or Failed" Apr 22 21:58:31.357: INFO: Pod "downwardapi-volume-b3548bd4-3e20-496f-a9cd-43b3d33b1e14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08112ms Apr 22 21:58:33.361: INFO: Pod "downwardapi-volume-b3548bd4-3e20-496f-a9cd-43b3d33b1e14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005930015s Apr 22 21:58:35.367: INFO: Pod "downwardapi-volume-b3548bd4-3e20-496f-a9cd-43b3d33b1e14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012131472s Apr 22 21:58:37.370: INFO: Pod "downwardapi-volume-b3548bd4-3e20-496f-a9cd-43b3d33b1e14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015270394s STEP: Saw pod success Apr 22 21:58:37.370: INFO: Pod "downwardapi-volume-b3548bd4-3e20-496f-a9cd-43b3d33b1e14" satisfied condition "Succeeded or Failed" Apr 22 21:58:37.372: INFO: Trying to get logs from node node1 pod downwardapi-volume-b3548bd4-3e20-496f-a9cd-43b3d33b1e14 container client-container: STEP: delete the pod Apr 22 21:58:37.388: INFO: Waiting for pod downwardapi-volume-b3548bd4-3e20-496f-a9cd-43b3d33b1e14 to disappear Apr 22 21:58:37.391: INFO: Pod downwardapi-volume-b3548bd4-3e20-496f-a9cd-43b3d33b1e14 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:37.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4975" for this suite. 
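------------------------------
[Editor's note] In the projected downwardAPI test above the container declares no CPU limit, so the downward API volume falls back to reporting the node's allocatable CPU. A minimal manifest of the same shape (names illustrative):

# downwardapi-cpu-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]   # with no limit set, prints node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
------------------------------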
• [SLOW TEST:6.075 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":69,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:22.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-9329 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9329 to expose endpoints map[] Apr 22 21:58:22.318: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found Apr 22 21:58:23.324: INFO: successfully validated that service endpoint-test2 in namespace services-9329 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9329 Apr 22 21:58:23.337: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:25.340: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:27.340: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:29.341: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9329 to expose endpoints map[pod1:[80]] Apr 22 21:58:29.351: INFO: successfully validated that service endpoint-test2 in namespace services-9329 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-9329 Apr 22 21:58:29.363: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:31.372: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:33.367: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:35.367: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:37.367: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9329 to expose endpoints map[pod1:[80] pod2:[80]] Apr 22 21:58:37.378: INFO: successfully validated that service endpoint-test2 in namespace services-9329 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-9329 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9329 to expose endpoints map[pod2:[80]] Apr 22 21:58:37.391: INFO: successfully
validated that service endpoint-test2 in namespace services-9329 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-9329 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9329 to expose endpoints map[] Apr 22 21:58:37.401: INFO: successfully validated that service endpoint-test2 in namespace services-9329 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:37.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9329" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:15.129 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":8,"skipped":81,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:12.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4040 STEP: creating service affinity-clusterip-transition in namespace services-4040 STEP: creating replication controller affinity-clusterip-transition in namespace services-4040 I0422 21:58:12.612198 39 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-4040, replica count: 3 I0422 21:58:15.663961 39 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:58:18.664888 39 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 21:58:21.666098 39 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 21:58:21.671: INFO: Creating new exec pod Apr 22 21:58:26.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4040 exec execpod-affinitycr2sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 22 21:58:27.074: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Apr 22 
21:58:27.074: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 21:58:27.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4040 exec execpod-affinitycr2sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.11.148 80' Apr 22 21:58:27.317: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.11.148 80\nConnection to 10.233.11.148 80 port [tcp/http] succeeded!\n" Apr 22 21:58:27.318: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 21:58:27.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4040 exec execpod-affinitycr2sd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.11.148:80/ ; done' Apr 22 21:58:27.622: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n" Apr 22 21:58:27.622: INFO: stdout: "\naffinity-clusterip-transition-qb496\naffinity-clusterip-transition-qb496\naffinity-clusterip-transition-jzks5\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-jzks5\naffinity-clusterip-transition-jzks5\naffinity-clusterip-transition-jzks5\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-qb496\naffinity-clusterip-transition-jzks5\naffinity-clusterip-transition-qb496\naffinity-clusterip-transition-qb496\naffinity-clusterip-transition-jzks5\naffinity-clusterip-transition-qb496\naffinity-clusterip-transition-jzks5\naffinity-clusterip-transition-q2qz9" Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-qb496 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-qb496 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-jzks5 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-jzks5 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-jzks5 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-jzks5 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-qb496 Apr 22 21:58:27.622: INFO: Received response from host: 
affinity-clusterip-transition-jzks5 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-qb496 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-qb496 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-jzks5 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-qb496 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-jzks5 Apr 22 21:58:27.622: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4040 exec execpod-affinitycr2sd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.11.148:80/ ; done' Apr 22 21:58:27.927: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.148:80/\n" Apr 22 21:58:27.928: INFO: stdout: "\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9\naffinity-clusterip-transition-q2qz9" Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received 
response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Received response from host: affinity-clusterip-transition-q2qz9 Apr 22 21:58:27.928: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-4040, will wait for the garbage collector to delete the pods Apr 22 21:58:27.991: INFO: Deleting ReplicationController affinity-clusterip-transition took: 3.216899ms Apr 22 21:58:28.092: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.333281ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:41.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4040" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:28.725 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":63,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:29.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:42.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7925" for this suite. • [SLOW TEST:13.101 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":13,"skipped":148,"failed":0} S ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:42.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Apr 22 21:58:42.915: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Apr 22 21:58:42.934: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:42.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-5521" for this suite. 
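------------------------------
[Editor's note] The RuntimeClass test above walks the standard verbs (create, get, list, watch, patch, update, delete, deleteCollection) against node.k8s.io/v1. The same round trip by hand; the name and label are illustrative, and `handler: runc` assumes a containerd/runc node:

# demo-runtimeclass.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: demo-runtimeclass
handler: runc

kubectl apply -f demo-runtimeclass.yaml
kubectl get runtimeclass demo-runtimeclass -o yaml
kubectl patch runtimeclass demo-runtimeclass --type=merge -p '{"metadata":{"labels":{"demo":"true"}}}'
kubectl delete runtimeclass -l demo=true   # the deleteCollection variant the test also exercises
------------------------------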
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":14,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:37.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-8226/configmap-test-c2f187f5-bcbc-496b-b74d-b597660961f2 STEP: Creating a pod to test consume configMaps Apr 22 21:58:37.470: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2843950-f1a3-4657-9be8-2400167f3b5a" in namespace "configmap-8226" to be "Succeeded or Failed" Apr 22 21:58:37.474: INFO: Pod "pod-configmaps-d2843950-f1a3-4657-9be8-2400167f3b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437106ms Apr 22 21:58:39.477: INFO: Pod "pod-configmaps-d2843950-f1a3-4657-9be8-2400167f3b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00736756s Apr 22 21:58:41.480: INFO: Pod "pod-configmaps-d2843950-f1a3-4657-9be8-2400167f3b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010299441s Apr 22 21:58:43.486: INFO: Pod "pod-configmaps-d2843950-f1a3-4657-9be8-2400167f3b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016480058s Apr 22 21:58:45.494: INFO: Pod "pod-configmaps-d2843950-f1a3-4657-9be8-2400167f3b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02454649s Apr 22 21:58:47.498: INFO: Pod "pod-configmaps-d2843950-f1a3-4657-9be8-2400167f3b5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.028469706s STEP: Saw pod success Apr 22 21:58:47.498: INFO: Pod "pod-configmaps-d2843950-f1a3-4657-9be8-2400167f3b5a" satisfied condition "Succeeded or Failed" Apr 22 21:58:47.500: INFO: Trying to get logs from node node2 pod pod-configmaps-d2843950-f1a3-4657-9be8-2400167f3b5a container env-test: STEP: delete the pod Apr 22 21:58:47.514: INFO: Waiting for pod pod-configmaps-d2843950-f1a3-4657-9be8-2400167f3b5a to disappear Apr 22 21:58:47.516: INFO: Pod pod-configmaps-d2843950-f1a3-4657-9be8-2400167f3b5a no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:47.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8226" for this suite. 
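------------------------------
[Editor's note] The ConfigMap test above injects a key into a container's environment and reads the value back from the pod's output. A minimal equivalent (names illustrative):

kubectl create configmap demo-config --from-literal=data-1=value-1

# configmap-env-demo.yaml -- consume the key as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: data-1
------------------------------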
• [SLOW TEST:10.091 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":83,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:37.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-0ffd9ad4-e87b-4023-9abc-d3e568768fa5 STEP: Creating a pod to test consume secrets Apr 22 21:58:37.477: INFO: Waiting up to 5m0s for pod "pod-secrets-b724c5d3-816a-4224-a608-d01313df1f1a" in namespace "secrets-3959" to be "Succeeded or Failed" Apr 22 21:58:37.479: INFO: Pod "pod-secrets-b724c5d3-816a-4224-a608-d01313df1f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061555ms Apr 22 21:58:39.483: INFO: Pod "pod-secrets-b724c5d3-816a-4224-a608-d01313df1f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005928032s Apr 22 21:58:41.487: INFO: Pod "pod-secrets-b724c5d3-816a-4224-a608-d01313df1f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009797192s Apr 22 21:58:43.492: INFO: Pod "pod-secrets-b724c5d3-816a-4224-a608-d01313df1f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014490058s Apr 22 21:58:45.495: INFO: Pod "pod-secrets-b724c5d3-816a-4224-a608-d01313df1f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018361081s Apr 22 21:58:47.498: INFO: Pod "pod-secrets-b724c5d3-816a-4224-a608-d01313df1f1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.021150173s STEP: Saw pod success Apr 22 21:58:47.498: INFO: Pod "pod-secrets-b724c5d3-816a-4224-a608-d01313df1f1a" satisfied condition "Succeeded or Failed" Apr 22 21:58:47.501: INFO: Trying to get logs from node node2 pod pod-secrets-b724c5d3-816a-4224-a608-d01313df1f1a container secret-volume-test: STEP: delete the pod Apr 22 21:58:47.518: INFO: Waiting for pod pod-secrets-b724c5d3-816a-4224-a608-d01313df1f1a to disappear Apr 22 21:58:47.520: INFO: Pod pod-secrets-b724c5d3-816a-4224-a608-d01313df1f1a no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:47.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3959" for this suite. STEP: Destroying namespace "secret-namespace-1037" for this suite. 
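------------------------------
[Editor's note] The point of the Secrets test above is that secrets are namespace-scoped: a pod resolves a secret volume by name within its own namespace, even when another namespace holds a secret with the same name. An illustrative demonstration:

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl create secret generic shared-name --from-literal=data=from-a -n demo-a
kubectl create secret generic shared-name --from-literal=data=from-b -n demo-b
# a pod in demo-a that mounts secret "shared-name" sees only the demo-a copy
------------------------------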
• [SLOW TEST:10.108 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:47.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:47.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2978" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":7,"skipped":87,"failed":0} SSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":82,"failed":0} [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:47.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Apr 22 21:58:47.558: INFO: created test-event-1 Apr 22 21:58:47.561: INFO: created test-event-2 Apr 22 21:58:47.564: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Apr 22 21:58:47.566: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Apr 22 21:58:47.578: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:47.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1242" for this suite. 
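------------------------------
[Editor's note] The Events test above deletes a labelled set of events in a single DeleteCollection call. From the command line the collection delete is a label-selector delete; the label here is our own, and the suite creates its labelled events through the API rather than kubectl:

kubectl get events -l testevent-set=true
kubectl delete events -l testevent-set=true
kubectl get events -l testevent-set=true   # confirms the collection is gone
------------------------------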
•S ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":10,"skipped":82,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:43.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-55c95374-239f-4f20-9f50-1381e9a26512 STEP: Creating a pod to test consume secrets Apr 22 21:58:43.083: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e11bcbe9-11e8-4683-87a2-3a1823f7b547" in namespace "projected-7597" to be "Succeeded or Failed" Apr 22 21:58:43.086: INFO: Pod "pod-projected-secrets-e11bcbe9-11e8-4683-87a2-3a1823f7b547": Phase="Pending", Reason="", readiness=false. Elapsed: 2.609103ms Apr 22 21:58:45.090: INFO: Pod "pod-projected-secrets-e11bcbe9-11e8-4683-87a2-3a1823f7b547": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007322733s Apr 22 21:58:47.093: INFO: Pod "pod-projected-secrets-e11bcbe9-11e8-4683-87a2-3a1823f7b547": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01020763s Apr 22 21:58:49.097: INFO: Pod "pod-projected-secrets-e11bcbe9-11e8-4683-87a2-3a1823f7b547": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013654846s Apr 22 21:58:51.101: INFO: Pod "pod-projected-secrets-e11bcbe9-11e8-4683-87a2-3a1823f7b547": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018057259s STEP: Saw pod success Apr 22 21:58:51.101: INFO: Pod "pod-projected-secrets-e11bcbe9-11e8-4683-87a2-3a1823f7b547" satisfied condition "Succeeded or Failed" Apr 22 21:58:51.103: INFO: Trying to get logs from node node2 pod pod-projected-secrets-e11bcbe9-11e8-4683-87a2-3a1823f7b547 container secret-volume-test: STEP: delete the pod Apr 22 21:58:51.117: INFO: Waiting for pod pod-projected-secrets-e11bcbe9-11e8-4683-87a2-3a1823f7b547 to disappear Apr 22 21:58:51.119: INFO: Pod pod-projected-secrets-e11bcbe9-11e8-4683-87a2-3a1823f7b547 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:51.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7597" for this suite. 
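------------------------------
[Editor's note] The projected-secret test above mounts the same secret through two volumes of one pod and reads it at both paths. Sketch (names illustrative):

kubectl create secret generic demo-secret --from-literal=data=value

# projected-secret-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-1/data /etc/secret-2/data"]
    volumeMounts:
    - name: secret-vol-1
      mountPath: /etc/secret-1
    - name: secret-vol-2
      mountPath: /etc/secret-2
  volumes:
  - name: secret-vol-1
    projected:
      sources:
      - secret:
          name: demo-secret
  - name: secret-vol-2
    projected:
      sources:
      - secret:
          name: demo-secret
------------------------------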
• [SLOW TEST:8.079 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":185,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:47.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 22 21:58:47.642: INFO: Waiting up to 5m0s for pod "pod-4ed08c6b-d599-4f44-a437-6aaaeb736857" in namespace "emptydir-6641" to be "Succeeded or Failed" Apr 22 21:58:47.644: INFO: Pod "pod-4ed08c6b-d599-4f44-a437-6aaaeb736857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140696ms Apr 22 21:58:49.648: INFO: Pod "pod-4ed08c6b-d599-4f44-a437-6aaaeb736857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005672515s Apr 22 21:58:51.653: INFO: Pod "pod-4ed08c6b-d599-4f44-a437-6aaaeb736857": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010298582s Apr 22 21:58:53.655: INFO: Pod "pod-4ed08c6b-d599-4f44-a437-6aaaeb736857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013219471s STEP: Saw pod success Apr 22 21:58:53.655: INFO: Pod "pod-4ed08c6b-d599-4f44-a437-6aaaeb736857" satisfied condition "Succeeded or Failed" Apr 22 21:58:53.658: INFO: Trying to get logs from node node2 pod pod-4ed08c6b-d599-4f44-a437-6aaaeb736857 container test-container: STEP: delete the pod Apr 22 21:58:53.671: INFO: Waiting for pod pod-4ed08c6b-d599-4f44-a437-6aaaeb736857 to disappear Apr 22 21:58:53.673: INFO: Pod pod-4ed08c6b-d599-4f44-a437-6aaaeb736857 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:53.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6641" for this suite. 
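------------------------------
[Editor's note] The EmptyDir test above writes a file as root with mode 0644 on the default (disk-backed) medium and verifies its permissions and content. A hand-rolled version (names illustrative):

# emptydir-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /cache/f && chmod 0644 /cache/f && ls -l /cache/f && cat /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}   # default medium; medium: Memory would use tmpfs instead
------------------------------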
• [SLOW TEST:6.070 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":92,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:51.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Apr 22 21:58:51.193: INFO: Waiting up to 5m0s for pod "downward-api-8ee2b5a0-21a5-4908-be45-60a7c62497e9" in namespace "downward-api-6557" to be "Succeeded or Failed" Apr 22 21:58:51.196: INFO: Pod "downward-api-8ee2b5a0-21a5-4908-be45-60a7c62497e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.245638ms Apr 22 21:58:53.201: INFO: Pod "downward-api-8ee2b5a0-21a5-4908-be45-60a7c62497e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007507802s Apr 22 21:58:55.207: INFO: Pod "downward-api-8ee2b5a0-21a5-4908-be45-60a7c62497e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014331976s STEP: Saw pod success Apr 22 21:58:55.207: INFO: Pod "downward-api-8ee2b5a0-21a5-4908-be45-60a7c62497e9" satisfied condition "Succeeded or Failed" Apr 22 21:58:55.210: INFO: Trying to get logs from node node1 pod downward-api-8ee2b5a0-21a5-4908-be45-60a7c62497e9 container dapi-container: STEP: delete the pod Apr 22 21:58:55.223: INFO: Waiting for pod downward-api-8ee2b5a0-21a5-4908-be45-60a7c62497e9 to disappear Apr 22 21:58:55.225: INFO: Pod downward-api-8ee2b5a0-21a5-4908-be45-60a7c62497e9 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:58:55.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6557" for this suite. 
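The point of the downward-API case above is that resourceFieldRef environment variables fall back to the node's allocatable values when the container declares no resource limits of its own. A minimal sketch (pod name, env var names, and image are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-example       # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container           # container name as reported in the log
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # illustrative
        command: ["/bin/sh", "-c", "env"]
        # No resources are set on this container, so both values below
        # default to the node's allocatable CPU and memory.
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory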
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:47.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-9ba3d87c-cf88-45f6-8a63-0750d84b1dd3 in namespace container-probe-7917 Apr 22 21:58:53.637: INFO: Started pod liveness-9ba3d87c-cf88-45f6-8a63-0750d84b1dd3 in namespace container-probe-7917 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 21:58:53.639: INFO: Initial restart count of pod liveness-9ba3d87c-cf88-45f6-8a63-0750d84b1dd3 is 0 Apr 22 21:59:09.677: INFO: Restart count of pod container-probe-7917/liveness-9ba3d87c-cf88-45f6-8a63-0750d84b1dd3 is now 1 (16.037562672s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:09.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7917" for this suite. • [SLOW TEST:22.088 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:53.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Apr 22 21:58:53.719: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. 
Apr 22 21:58:53.884: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 22 21:58:55.914: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:58:57.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:58:59.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:59:01.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:59:03.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:59:05.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261533, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 21:59:09.134: INFO: Waited 1.211792821s for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Apr 22 21:59:09.540: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:10.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-675" for this suite. 
• [SLOW TEST:16.733 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":12,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:28.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-60128e5f-c125-4f45-bea5-624c505a446a STEP: Creating secret with name s-test-opt-upd-f99ac6a2-2c3e-4351-83ec-e5e226628838 STEP: Creating the pod Apr 22 21:57:28.999: INFO: The status of Pod pod-secrets-281d4cbf-998d-4a8b-b385-ffaba7aa6dd8 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:31.002: INFO: The status of Pod pod-secrets-281d4cbf-998d-4a8b-b385-ffaba7aa6dd8 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:33.002: INFO: The status of Pod pod-secrets-281d4cbf-998d-4a8b-b385-ffaba7aa6dd8 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:35.003: INFO: The status of Pod pod-secrets-281d4cbf-998d-4a8b-b385-ffaba7aa6dd8 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:37.002: INFO: The status of Pod pod-secrets-281d4cbf-998d-4a8b-b385-ffaba7aa6dd8 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:39.002: INFO: The status of Pod pod-secrets-281d4cbf-998d-4a8b-b385-ffaba7aa6dd8 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:41.004: INFO: The status of Pod pod-secrets-281d4cbf-998d-4a8b-b385-ffaba7aa6dd8 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:43.003: INFO: The status of Pod pod-secrets-281d4cbf-998d-4a8b-b385-ffaba7aa6dd8 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:57:45.003: INFO: The status of Pod pod-secrets-281d4cbf-998d-4a8b-b385-ffaba7aa6dd8 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-60128e5f-c125-4f45-bea5-624c505a446a STEP: Updating secret s-test-opt-upd-f99ac6a2-2c3e-4351-83ec-e5e226628838 STEP: Creating secret with name s-test-opt-create-1920afb0-0833-489a-a518-d7e1705affea STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:14.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5579" for this suite. 
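The "opt" names in the test above refer to optional secret volumes: a secret marked optional may be deleted, updated, or created after the pod, and the kubelet reconciles the mounted files. A minimal sketch of one such mount, with a hypothetical pod and command:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-optional-example       # hypothetical name
    spec:
      containers:
      - name: secret-watcher                   # hypothetical
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["/bin/sh", "-c", "while true; do ls -l /etc/secret-volume 2>/dev/null; sleep 5; done"]
        volumeMounts:
        - name: s-test-opt-create
          mountPath: /etc/secret-volume
      volumes:
      - name: s-test-opt-create
        secret:
          secretName: s-test-opt-create        # may not exist yet; optional lets the pod start anyway
          optional: true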
• [SLOW TEST:105.274 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0} SSSS ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":101,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:09.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-d3d71d22-90dd-4153-b300-3b0604e2e8f5 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:15.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3810" for this suite. 
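For the binary-data case above, the interesting field is binaryData, which carries base64-encoded bytes alongside ordinary string data. A small sketch (name shortened from the log's generated one, payload illustrative):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-test-upd-example         # shortened from the generated name in the log
    data:
      text-data: some plain text               # ordinary UTF-8 value
    binaryData:
      dump.bin: eyJmb28iOiAiYmFyIn0=           # base64 in the manifest, raw bytes in the mounted file

Keys in data and binaryData share one namespace, so a given key may appear in only one of the two maps.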
• [SLOW TEST:6.072 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:10.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Apr 22 21:59:10.497: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:16.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5415" for this suite. 
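The init-container case above only needs the ordering guarantee: each initContainer runs to completion, in order, before any app container starts. A minimal sketch under that assumption (names and commands are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-example                   # hypothetical name
    spec:
      restartPolicy: Never
      initContainers:                          # run one at a time, in order, to completion
      - name: init1
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["/bin/true"]
      - name: init2
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["/bin/true"]
      containers:
      - name: run1
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["/bin/sh", "-c", "echo app ran"]   # starts only after both inits succeed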
• [SLOW TEST:5.654 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":13,"skipped":116,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:23.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Apr 22 21:58:23.439: INFO: PodSpec: initContainers in spec.initContainers Apr 22 21:59:18.156: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d8dc07f0-241a-4185-9cfa-db128f87752a", GenerateName:"", Namespace:"init-container-9013", SelfLink:"", UID:"821abc82-2fc4-4b0f-9837-3afffd9a7e57", ResourceVersion:"35093", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63786261503, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"439375415"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.30\"\n ],\n \"mac\": \"02:07:17:57:4c:83\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.30\"\n ],\n \"mac\": \"02:07:17:57:4c:83\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00501f9b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00501f9c8)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00501f9e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00501f9f8)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00501fa10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00501fa40)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-gfr9r", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000de5820), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-gfr9r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-gfr9r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-gfr9r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0031b7d28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004056930), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0031b7db0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0031b7dd0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0031b7dd8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0031b7ddc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0004a3960), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261503, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261503, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261503, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261503, loc:(*time.Location)(0x9e2e180)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.208", PodIP:"10.244.4.30", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.4.30"}}, StartTime:(*v1.Time)(0xc00501fa70), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc004056a80)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc004056af0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://f9f6d640c01fc071a5d56705d1b7b27c2bd08b3e059bc0a74c211b83c3856edb", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000de59a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000de5980), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0031b7e5f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:18.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9013" for this suite. 
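Stripped of the cluster-injected fields (the service-account projected volume, tolerations, and the multus/PSP annotations), the PodSpec dumped above corresponds roughly to the manifest below. With restartPolicy: Always, the kubelet retries the failing init1 with backoff (hence RestartCount:3 in the status above), and init2 and run1 stay Waiting forever, which is exactly what the test asserts:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-d8dc07f0-241a-4185-9cfa-db128f87752a
      labels:
        name: foo
    spec:
      restartPolicy: Always            # failed init containers are retried with backoff
      initContainers:
      - name: init1
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["/bin/false"]        # always fails, blocking init2 and run1
      - name: init2
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["/bin/true"]
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.4.1
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m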
• [SLOW TEST:54.745 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:14.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 21:59:14.180: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-edf31a9d-f3a9-43a4-aa96-1368d7b0a520" in namespace "security-context-test-8554" to be "Succeeded or Failed" Apr 22 21:59:14.182: INFO: Pod "alpine-nnp-false-edf31a9d-f3a9-43a4-aa96-1368d7b0a520": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125464ms Apr 22 21:59:16.185: INFO: Pod "alpine-nnp-false-edf31a9d-f3a9-43a4-aa96-1368d7b0a520": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005023659s Apr 22 21:59:18.188: INFO: Pod "alpine-nnp-false-edf31a9d-f3a9-43a4-aa96-1368d7b0a520": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007975874s Apr 22 21:59:18.188: INFO: Pod "alpine-nnp-false-edf31a9d-f3a9-43a4-aa96-1368d7b0a520" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:18.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8554" for this suite. 
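The security-context case above runs a pod whose container sets allowPrivilegeEscalation: false, which maps to the no_new_privs flag on Linux, so setuid binaries cannot gain privileges at exec time. A sketch, with the image and UID as assumptions inferred from the pod name in the log:

    apiVersion: v1
    kind: Pod
    metadata:
      name: alpine-nnp-false-example   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: alpine-nnp-false
        image: k8s.gcr.io/e2e-test-images/nonewprivs:1.3   # assumed test image
        securityContext:
          allowPrivilegeEscalation: false   # sets no_new_privs for the container process
          runAsUser: 1000                   # illustrative non-root UID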
• ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:16.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-6555f611-d1f1-4009-bf8b-1f53cbfab1ac STEP: Creating a pod to test consume configMaps Apr 22 21:59:16.191: INFO: Waiting up to 5m0s for pod "pod-configmaps-b25f018f-d71e-420e-b40a-96f0b944adb0" in namespace "configmap-8176" to be "Succeeded or Failed" Apr 22 21:59:16.193: INFO: Pod "pod-configmaps-b25f018f-d71e-420e-b40a-96f0b944adb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014115ms Apr 22 21:59:18.196: INFO: Pod "pod-configmaps-b25f018f-d71e-420e-b40a-96f0b944adb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005050579s Apr 22 21:59:20.200: INFO: Pod "pod-configmaps-b25f018f-d71e-420e-b40a-96f0b944adb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009121454s STEP: Saw pod success Apr 22 21:59:20.200: INFO: Pod "pod-configmaps-b25f018f-d71e-420e-b40a-96f0b944adb0" satisfied condition "Succeeded or Failed" Apr 22 21:59:20.202: INFO: Trying to get logs from node node2 pod pod-configmaps-b25f018f-d71e-420e-b40a-96f0b944adb0 container agnhost-container: STEP: delete the pod Apr 22 21:59:20.342: INFO: Waiting for pod pod-configmaps-b25f018f-d71e-420e-b40a-96f0b944adb0 to disappear Apr 22 21:59:20.344: INFO: Pod pod-configmaps-b25f018f-d71e-420e-b40a-96f0b944adb0 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:20.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8176" for this suite. 
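In the configmap case above, "mappings" means the items list that renames keys to file paths, and "non-root" a pod-level runAsUser. A hedged sketch along those lines (image, UID, and key names are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-example     # hypothetical name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                # illustrative non-root UID
        runAsNonRoot: true
      containers:
      - name: agnhost-container        # container name as reported in the log
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # illustrative; the test uses an agnhost image
        command: ["/bin/sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-test-volume-map    # shortened from the generated name in the log
          items:                             # map the key "data-2" to a nested path
          - key: data-2
            path: path/to/data-2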
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":128,"failed":0} SSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":7,"skipped":158,"failed":0} [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:18.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created Apr 22 21:59:18.202: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:20.205: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:22.205: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:23.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8996" for this suite. 
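The adoption step above relies on the ReplicationController's label selector matching the pre-existing pod-adoption pod, so the controller takes ownership of the orphan instead of creating a replacement. Roughly (replica count assumed, image borrowed from elsewhere in this log):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: pod-adoption
    spec:
      replicas: 1
      selector:
        name: pod-adoption             # matches the orphan pod's 'name' label, triggering adoption
      template:
        metadata:
          labels:
            name: pod-adoption
        spec:
          containers:
          - name: pod-adoption
            image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1   # image seen elsewhere in this log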
• [SLOW TEST:5.058 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":8,"skipped":158,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:18.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 21:59:18.689: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 21:59:20.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261558, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261558, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261558, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261558, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 21:59:23.712: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 22 21:59:23.725: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:23.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5385" for this suite. STEP: Destroying namespace "webhook-5385-markers" for this suite. 
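The registration step above amounts to a ValidatingWebhookConfiguration that intercepts CRD creation and rejects it; everything below except the backing service name and namespace (which the log waits on) is an assumption about how such a webhook is typically wired:

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-crd-example           # hypothetical name
    webhooks:
    - name: deny-crd.example.com       # hypothetical name
      rules:
      - apiGroups: ["apiextensions.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["customresourcedefinitions"]
      clientConfig:
        service:
          name: e2e-test-webhook       # service the log waits to get endpoints
          namespace: webhook-5385
          path: /crd                   # hypothetical handler path
        # caBundle: <base64 CA>
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Fail              # reject the request if the webhook is unreachable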
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.515 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":4,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:20.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 21:59:20.404: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b64d14e3-d30e-491d-b26f-42d47fe7eb1b" in namespace "downward-api-3935" to be "Succeeded or Failed" Apr 22 21:59:20.407: INFO: Pod "downwardapi-volume-b64d14e3-d30e-491d-b26f-42d47fe7eb1b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.422016ms Apr 22 21:59:22.411: INFO: Pod "downwardapi-volume-b64d14e3-d30e-491d-b26f-42d47fe7eb1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007216549s Apr 22 21:59:24.415: INFO: Pod "downwardapi-volume-b64d14e3-d30e-491d-b26f-42d47fe7eb1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011218303s STEP: Saw pod success Apr 22 21:59:24.415: INFO: Pod "downwardapi-volume-b64d14e3-d30e-491d-b26f-42d47fe7eb1b" satisfied condition "Succeeded or Failed" Apr 22 21:59:24.418: INFO: Trying to get logs from node node2 pod downwardapi-volume-b64d14e3-d30e-491d-b26f-42d47fe7eb1b container client-container: STEP: delete the pod Apr 22 21:59:24.429: INFO: Waiting for pod downwardapi-volume-b64d14e3-d30e-491d-b26f-42d47fe7eb1b to disappear Apr 22 21:59:24.431: INFO: Pod downwardapi-volume-b64d14e3-d30e-491d-b26f-42d47fe7eb1b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:24.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3935" for this suite. 
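For the DefaultMode case above: a downwardAPI volume applies defaultMode to every file it projects unless an item overrides it. A minimal sketch with an assumed mode of 0400 (the test's actual mode is not shown in the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-example   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container           # container name as reported in the log
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # illustrative
        command: ["/bin/sh", "-c", "ls -l /etc/podinfo"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          defaultMode: 0400              # assumed mode, applied to every projected file
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name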
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":136,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:24.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments Apr 22 21:59:24.479: INFO: Waiting up to 5m0s for pod "client-containers-f428fd4c-6a4a-4003-96b5-1a8986b7af30" in namespace "containers-2006" to be "Succeeded or Failed" Apr 22 21:59:24.481: INFO: Pod "client-containers-f428fd4c-6a4a-4003-96b5-1a8986b7af30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053925ms Apr 22 21:59:26.486: INFO: Pod "client-containers-f428fd4c-6a4a-4003-96b5-1a8986b7af30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006297627s Apr 22 21:59:28.489: INFO: Pod "client-containers-f428fd4c-6a4a-4003-96b5-1a8986b7af30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010162948s STEP: Saw pod success Apr 22 21:59:28.489: INFO: Pod "client-containers-f428fd4c-6a4a-4003-96b5-1a8986b7af30" satisfied condition "Succeeded or Failed" Apr 22 21:59:28.492: INFO: Trying to get logs from node node1 pod client-containers-f428fd4c-6a4a-4003-96b5-1a8986b7af30 container agnhost-container: STEP: delete the pod Apr 22 21:59:28.505: INFO: Waiting for pod client-containers-f428fd4c-6a4a-4003-96b5-1a8986b7af30 to disappear Apr 22 21:59:28.507: INFO: Pod client-containers-f428fd4c-6a4a-4003-96b5-1a8986b7af30 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:28.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2006" for this suite. 
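The override case above exercises the rule that spec.containers[].command replaces the image ENTRYPOINT while args replaces the image CMD; supplying only args keeps the image's entrypoint and swaps just the default arguments. A sketch with illustrative values:

    apiVersion: v1
    kind: Pod
    metadata:
      name: client-containers-example    # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: agnhost-container          # container name as reported in the log
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # illustrative; the test uses an agnhost image
        # command is left unset, so only the image's default arguments (CMD) are overridden:
        args: ["echo", "override", "arguments"]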
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":139,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:28.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:28.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9294" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":17,"skipped":139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:23.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:23.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-5030 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 
Apr 22 21:59:29.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-6457" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:29.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-5030" for this suite. • [SLOW TEST:6.090 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":9,"skipped":168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:30.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:30.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2573" for this suite. 
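The readiness case that just finished confirms two semantics: a failing readinessProbe keeps the pod out of Ready (and out of Service endpoints), but, unlike a liveness probe, a readiness failure never restarts the container, so restartCount stays 0 for the full minute. A minimal sketch (names, timings, and the long-running command are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-fail-example     # hypothetical name
    spec:
      containers:
      - name: readiness
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["/bin/sh", "-c", "sleep 600"]
        readinessProbe:                # fails forever: pod stays Running but never becomes Ready
          exec:
            command: ["/bin/false"]
          initialDelaySeconds: 5
          periodSeconds: 5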
• [SLOW TEST:60.050 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":129,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:28.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-cae54052-d5b5-4a64-9b35-7a5cd4dce948 STEP: Creating a pod to test consume configMaps Apr 22 21:59:28.660: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-59508fff-fa4a-4235-8df9-2ca0cd8c6902" in namespace "projected-7689" to be "Succeeded or Failed" Apr 22 21:59:28.663: INFO: Pod "pod-projected-configmaps-59508fff-fa4a-4235-8df9-2ca0cd8c6902": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115116ms Apr 22 21:59:30.666: INFO: Pod "pod-projected-configmaps-59508fff-fa4a-4235-8df9-2ca0cd8c6902": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005306954s Apr 22 21:59:32.669: INFO: Pod "pod-projected-configmaps-59508fff-fa4a-4235-8df9-2ca0cd8c6902": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008792584s STEP: Saw pod success Apr 22 21:59:32.669: INFO: Pod "pod-projected-configmaps-59508fff-fa4a-4235-8df9-2ca0cd8c6902" satisfied condition "Succeeded or Failed" Apr 22 21:59:32.672: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-59508fff-fa4a-4235-8df9-2ca0cd8c6902 container agnhost-container: STEP: delete the pod Apr 22 21:59:32.685: INFO: Waiting for pod pod-projected-configmaps-59508fff-fa4a-4235-8df9-2ca0cd8c6902 to disappear Apr 22 21:59:32.688: INFO: Pod pod-projected-configmaps-59508fff-fa4a-4235-8df9-2ca0cd8c6902 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:32.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7689" for this suite. 
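The projected-configmap case above combines two knobs seen separately earlier: an items key-to-path mapping plus a per-item file mode, this time inside a projected volume source. A hedged sketch (image, key, and mode are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps-example   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: agnhost-container                # container name as reported in the log
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # illustrative
        command: ["/bin/sh", "-c", "ls -l /etc/projected-configmap-volume/path/to && cat /etc/projected-configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: projected-configmap-test-volume-map   # shortened from the log
              items:
              - key: data-2
                path: path/to/data-2
                mode: 0400                                 # per-item mode override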
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":163,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:55.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Apr 22 21:58:55.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7668 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' Apr 22 21:58:55.518: INFO: stderr: "" Apr 22 21:58:55.518: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 Apr 22 21:58:55.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7668 delete pods e2e-test-httpd-pod' Apr 22 21:59:34.644: INFO: stderr: "" Apr 22 21:59:34.644: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:34.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7668" for this suite. 
• [SLOW TEST:39.315 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":17,"skipped":251,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:34.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:34.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-4292" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":18,"skipped":273,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:30.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Apr 22 21:59:30.729: INFO: Waiting up to 5m0s for pod "var-expansion-05a38d2e-47ae-4d46-861b-5098f7b946bd" in namespace "var-expansion-7457" to be "Succeeded or Failed" Apr 22 21:59:30.731: INFO: Pod "var-expansion-05a38d2e-47ae-4d46-861b-5098f7b946bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162625ms Apr 22 21:59:32.734: INFO: Pod "var-expansion-05a38d2e-47ae-4d46-861b-5098f7b946bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0057023s Apr 22 21:59:34.741: INFO: Pod "var-expansion-05a38d2e-47ae-4d46-861b-5098f7b946bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012026491s STEP: Saw pod success Apr 22 21:59:34.741: INFO: Pod "var-expansion-05a38d2e-47ae-4d46-861b-5098f7b946bd" satisfied condition "Succeeded or Failed" Apr 22 21:59:34.746: INFO: Trying to get logs from node node2 pod var-expansion-05a38d2e-47ae-4d46-861b-5098f7b946bd container dapi-container: STEP: delete the pod Apr 22 21:59:34.767: INFO: Waiting for pod var-expansion-05a38d2e-47ae-4d46-861b-5098f7b946bd to disappear Apr 22 21:59:34.769: INFO: Pod var-expansion-05a38d2e-47ae-4d46-861b-5098f7b946bd no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:34.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7457" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":10,"skipped":144,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:34.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 21:59:34.811: INFO: The status of Pod pod-secrets-54d73187-41d6-4344-a7e4-60f4bf03f710 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:36.815: INFO: The status of Pod pod-secrets-54d73187-41d6-4344-a7e4-60f4bf03f710 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:38.815: INFO: The status of Pod pod-secrets-54d73187-41d6-4344-a7e4-60f4bf03f710 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:38.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9422" for this suite. 
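The var-expansion spec in this block relies on subpath expansion: the kubelet substitutes $(POD_NAME)-style references in a mount's SubPathExpr from the container's environment, so each pod lands in its own subdirectory. A minimal sketch of that container shape, assuming k8s.io/api as a dependency; the env var, volume, and image names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
		Env: []corev1.EnvVar{{
			Name: "POD_NAME",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			},
		}},
		VolumeMounts: []corev1.VolumeMount{{
			Name:        "workdir1",
			MountPath:   "/logscontainer",
			SubPathExpr: "$(POD_NAME)", // substituted per pod at mount time
		}},
	}
	fmt.Println(c.VolumeMounts[0].SubPathExpr)
}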
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":19,"skipped":280,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:34.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-9e8c36d0-40e9-4824-939d-54e5261a451e STEP: Creating a pod to test consume configMaps Apr 22 21:59:34.821: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-527a20a1-e01e-4286-980a-07d12e632c01" in namespace "projected-8641" to be "Succeeded or Failed" Apr 22 21:59:34.824: INFO: Pod "pod-projected-configmaps-527a20a1-e01e-4286-980a-07d12e632c01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447553ms Apr 22 21:59:36.827: INFO: Pod "pod-projected-configmaps-527a20a1-e01e-4286-980a-07d12e632c01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00544177s Apr 22 21:59:38.829: INFO: Pod "pod-projected-configmaps-527a20a1-e01e-4286-980a-07d12e632c01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007717514s STEP: Saw pod success Apr 22 21:59:38.829: INFO: Pod "pod-projected-configmaps-527a20a1-e01e-4286-980a-07d12e632c01" satisfied condition "Succeeded or Failed" Apr 22 21:59:38.832: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-527a20a1-e01e-4286-980a-07d12e632c01 container agnhost-container: STEP: delete the pod Apr 22 21:59:38.844: INFO: Waiting for pod pod-projected-configmaps-527a20a1-e01e-4286-980a-07d12e632c01 to disappear Apr 22 21:59:38.845: INFO: Pod pod-projected-configmaps-527a20a1-e01e-4286-980a-07d12e632c01 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:38.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8641" for this suite. 
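The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines above come from a phase poll. A sketch of the same wait with client-go, assuming k8s.io/client-go and k8s.io/apimachinery as dependencies; the namespace and pod name below are illustrative, and a production poll might tolerate transient Get errors instead of aborting:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ns, name := "projected-8641", "pod-projected-configmaps-example" // illustrative
	// Poll every 2s, up to 5m, until the pod reaches a terminal phase.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	fmt.Println("wait result:", err)
}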
•S ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:38.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 21:59:38.921: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb6123b9-b640-477f-a15f-deaf8990754d" in namespace "projected-5217" to be "Succeeded or Failed" Apr 22 21:59:38.923: INFO: Pod "downwardapi-volume-eb6123b9-b640-477f-a15f-deaf8990754d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083573ms Apr 22 21:59:40.926: INFO: Pod "downwardapi-volume-eb6123b9-b640-477f-a15f-deaf8990754d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004467224s Apr 22 21:59:42.930: INFO: Pod "downwardapi-volume-eb6123b9-b640-477f-a15f-deaf8990754d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008215661s STEP: Saw pod success Apr 22 21:59:42.930: INFO: Pod "downwardapi-volume-eb6123b9-b640-477f-a15f-deaf8990754d" satisfied condition "Succeeded or Failed" Apr 22 21:59:42.933: INFO: Trying to get logs from node node1 pod downwardapi-volume-eb6123b9-b640-477f-a15f-deaf8990754d container client-container: STEP: delete the pod Apr 22 21:59:42.945: INFO: Waiting for pod downwardapi-volume-eb6123b9-b640-477f-a15f-deaf8990754d to disappear Apr 22 21:59:42.947: INFO: Pod downwardapi-volume-eb6123b9-b640-477f-a15f-deaf8990754d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:42.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5217" for this suite. 
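The "podname only" spec above projects a single downward API file. A sketch of the volume source it exercises, assuming k8s.io/api as a dependency; the volume name and file path are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected downward API volume: one file, "podname", filled from metadata.name.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}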
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:43.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-fc11869d-fd64-446e-bc1d-23f1626aa476 STEP: Creating a pod to test consume secrets Apr 22 21:59:43.196: INFO: Waiting up to 5m0s for pod "pod-secrets-bbdaca52-27dc-4c4e-8816-ebbc12b11ead" in namespace "secrets-2319" to be "Succeeded or Failed" Apr 22 21:59:43.198: INFO: Pod "pod-secrets-bbdaca52-27dc-4c4e-8816-ebbc12b11ead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090045ms Apr 22 21:59:45.219: INFO: Pod "pod-secrets-bbdaca52-27dc-4c4e-8816-ebbc12b11ead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022749977s Apr 22 21:59:47.222: INFO: Pod "pod-secrets-bbdaca52-27dc-4c4e-8816-ebbc12b11ead": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025878289s STEP: Saw pod success Apr 22 21:59:47.222: INFO: Pod "pod-secrets-bbdaca52-27dc-4c4e-8816-ebbc12b11ead" satisfied condition "Succeeded or Failed" Apr 22 21:59:47.224: INFO: Trying to get logs from node node1 pod pod-secrets-bbdaca52-27dc-4c4e-8816-ebbc12b11ead container secret-volume-test: STEP: delete the pod Apr 22 21:59:47.237: INFO: Waiting for pod pod-secrets-bbdaca52-27dc-4c4e-8816-ebbc12b11ead to disappear Apr 22 21:59:47.241: INFO: Pod pod-secrets-bbdaca52-27dc-4c4e-8816-ebbc12b11ead no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:47.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2319" for this suite. • ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:32.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Apr 22 21:59:32.746: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:34.753: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:36.751: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Apr 22 21:59:36.766: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:38.771: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:40.770: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 22 21:59:40.783: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 21:59:40.786: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 21:59:42.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 21:59:42.790: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 21:59:44.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 21:59:44.790: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 21:59:46.788: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 21:59:46.792: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 21:59:48.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 21:59:48.790: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:48.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4353" for this suite. 
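The poststart spec above attaches an exec hook that runs right after the container starts; in the test it reaches back to the pod-handle-http-request pod created first. A sketch of that container shape, with the hook command as an assumption rather than a quote from the test source; note the hook type is corev1.LifecycleHandler in current k8s.io/api (it was corev1.Handler in the v1.21-era API this run uses):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "pod-with-poststart-exec-hook",
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.LifecycleHandler{
				Exec: &corev1.ExecAction{
					// Illustrative hook: hit the handler pod after this container starts.
					Command: []string{"sh", "-c", "curl http://pod-handle-http-request:8080/echo?msg=poststart"},
				},
			},
		},
	}
	fmt.Println(c.Lifecycle.PostStart.Exec.Command)
}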
• [SLOW TEST:16.087 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":169,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:48.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-07030b84-0413-4f7e-8c3d-12801550f3e5 STEP: Creating a pod to test consume configMaps Apr 22 21:59:48.852: INFO: Waiting up to 5m0s for pod "pod-configmaps-d205e106-a09a-4016-af89-7199910629f2" in namespace "configmap-8173" to be "Succeeded or Failed" Apr 22 21:59:48.856: INFO: Pod "pod-configmaps-d205e106-a09a-4016-af89-7199910629f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.831446ms Apr 22 21:59:50.860: INFO: Pod "pod-configmaps-d205e106-a09a-4016-af89-7199910629f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007849911s Apr 22 21:59:52.865: INFO: Pod "pod-configmaps-d205e106-a09a-4016-af89-7199910629f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012539465s STEP: Saw pod success Apr 22 21:59:52.865: INFO: Pod "pod-configmaps-d205e106-a09a-4016-af89-7199910629f2" satisfied condition "Succeeded or Failed" Apr 22 21:59:52.868: INFO: Trying to get logs from node node2 pod pod-configmaps-d205e106-a09a-4016-af89-7199910629f2 container configmap-volume-test: STEP: delete the pod Apr 22 21:59:52.879: INFO: Waiting for pod pod-configmaps-d205e106-a09a-4016-af89-7199910629f2 to disappear Apr 22 21:59:52.881: INFO: Pod pod-configmaps-d205e106-a09a-4016-af89-7199910629f2 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:52.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8173" for this suite. 
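The spec above mounts one ConfigMap through two separate volumes in the same pod and checks both paths serve the data. A sketch of that pod spec, assuming k8s.io/api as a dependency; names and mount paths are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Two volumes backed by the same ConfigMap, mounted at two paths in one container.
	cm := func() *corev1.ConfigMapVolumeSource {
		return &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
		}
	}
	spec := corev1.PodSpec{
		Volumes: []corev1.Volume{
			{Name: "configmap-volume-1", VolumeSource: corev1.VolumeSource{ConfigMap: cm()}},
			{Name: "configmap-volume-2", VolumeSource: corev1.VolumeSource{ConfigMap: cm()}},
		},
		Containers: []corev1.Container{{
			Name:  "configmap-volume-test",
			Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
			VolumeMounts: []corev1.VolumeMount{
				{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1"},
				{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
			},
		}},
	}
	fmt.Println(len(spec.Volumes))
}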
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":175,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:29.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-75t6 STEP: Creating a pod to test atomic-volume-subpath Apr 22 21:59:29.461: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-75t6" in namespace "subpath-8100" to be "Succeeded or Failed" Apr 22 21:59:29.463: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02511ms Apr 22 21:59:31.469: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007557537s Apr 22 21:59:33.473: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Running", Reason="", readiness=true. Elapsed: 4.01198077s Apr 22 21:59:35.478: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Running", Reason="", readiness=true. Elapsed: 6.017144948s Apr 22 21:59:37.482: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Running", Reason="", readiness=true. Elapsed: 8.021066179s Apr 22 21:59:39.488: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Running", Reason="", readiness=true. Elapsed: 10.02687359s Apr 22 21:59:41.493: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Running", Reason="", readiness=true. Elapsed: 12.032072853s Apr 22 21:59:43.498: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Running", Reason="", readiness=true. Elapsed: 14.036687082s Apr 22 21:59:45.503: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Running", Reason="", readiness=true. Elapsed: 16.042084835s Apr 22 21:59:47.507: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Running", Reason="", readiness=true. Elapsed: 18.04590837s Apr 22 21:59:49.512: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Running", Reason="", readiness=true. Elapsed: 20.050962112s Apr 22 21:59:51.519: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Running", Reason="", readiness=true. Elapsed: 22.057702857s Apr 22 21:59:53.524: INFO: Pod "pod-subpath-test-configmap-75t6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.063122068s STEP: Saw pod success Apr 22 21:59:53.524: INFO: Pod "pod-subpath-test-configmap-75t6" satisfied condition "Succeeded or Failed" Apr 22 21:59:53.528: INFO: Trying to get logs from node node1 pod pod-subpath-test-configmap-75t6 container test-container-subpath-configmap-75t6: STEP: delete the pod Apr 22 21:59:53.541: INFO: Waiting for pod pod-subpath-test-configmap-75t6 to disappear Apr 22 21:59:53.543: INFO: Pod pod-subpath-test-configmap-75t6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-75t6 Apr 22 21:59:53.543: INFO: Deleting pod "pod-subpath-test-configmap-75t6" in namespace "subpath-8100" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 21:59:53.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8100" for this suite. • [SLOW TEST:24.142 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":201,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:52.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4427.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4427.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4427.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4427.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4427.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4427.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4427.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 22:00:06.947: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local from pod dns-4427/dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304: the server could not find the requested resource (get pods dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304) Apr 22 22:00:06.956: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4427.svc.cluster.local from pod dns-4427/dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304: the server could not find the requested resource (get pods dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304) Apr 22 22:00:06.967: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local from pod dns-4427/dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304: the server could not find the requested resource (get pods dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304) Apr 22 22:00:06.969: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4427.svc.cluster.local from pod dns-4427/dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304: the server could not find the requested resource (get pods dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304) Apr 22 22:00:06.972: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4427.svc.cluster.local from pod dns-4427/dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304: the server could not find the requested resource (get pods dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304) Apr 22 22:00:06.977: INFO: Lookups using dns-4427/dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4427.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4427.svc.cluster.local jessie_udp@dns-test-service-2.dns-4427.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4427.svc.cluster.local] Apr 22 
22:00:12.014: INFO: DNS probes using dns-4427/dns-test-75fd7962-22c1-4a99-8016-fad0cd5a1304 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:12.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4427" for this suite. • [SLOW TEST:19.138 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":21,"skipped":178,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:12.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server Apr 22 22:00:12.110: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-168 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:12.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-168" for this suite. 
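The proxy spec above passes -p 0, which asks kubectl proxy to bind an ephemeral port; the chosen port is reported on stdout, and the test then curls /api/ through it. A sketch of launching the proxy and reading that line from Go; the "Starting to serve on 127.0.0.1:<port>" banner format is an assumption about kubectl's output, and the kubeconfig path is the one from this run's environment:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// First stdout line names the ephemeral port the proxy bound.
	line, _ := bufio.NewReader(stdout).ReadString('\n')
	fmt.Print(line) // parse the port from here, then curl /api/ through it
	_ = cmd.Process.Kill()
}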
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":22,"skipped":199,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:12.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Apr 22 22:00:12.258: INFO: created test-pod-1 Apr 22 22:00:12.268: INFO: created test-pod-2 Apr 22 22:00:12.277: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:12.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-502" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":23,"skipped":200,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:41.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-92fa6d4c-a20c-4695-8bd4-9f394a7fc827 STEP: Creating configMap with name cm-test-opt-upd-0bbb53ed-014f-4cfd-9453-79ea10790624 STEP: Creating the pod Apr 22 21:58:41.472: INFO: The status of Pod pod-configmaps-f800d567-7afd-4a29-8c03-37d0d72c85bb is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:43.476: INFO: The status of Pod pod-configmaps-f800d567-7afd-4a29-8c03-37d0d72c85bb is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:45.478: INFO: The status of Pod pod-configmaps-f800d567-7afd-4a29-8c03-37d0d72c85bb is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:47.475: INFO: The status of Pod pod-configmaps-f800d567-7afd-4a29-8c03-37d0d72c85bb is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:49.476: INFO: The status of Pod pod-configmaps-f800d567-7afd-4a29-8c03-37d0d72c85bb is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:58:51.477: INFO: The status of Pod pod-configmaps-f800d567-7afd-4a29-8c03-37d0d72c85bb is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-92fa6d4c-a20c-4695-8bd4-9f394a7fc827 STEP: Updating configmap cm-test-opt-upd-0bbb53ed-014f-4cfd-9453-79ea10790624 STEP: Creating configMap with name 
cm-test-opt-create-8c867fb3-2894-464e-a728-30f984967213 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:22.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2846" for this suite. • [SLOW TEST:101.050 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:53.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-4876 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-4876 Apr 22 21:59:53.630: INFO: Found 0 stateful pods, waiting for 1 Apr 22 22:00:03.634: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Apr 22 22:00:03.661: INFO: Deleting all statefulset in ns statefulset-4876 Apr 22 22:00:03.663: INFO: Scaling statefulset ss to 0 Apr 22 22:00:23.678: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:00:23.680: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:23.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4876" for this suite. 
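The scale-subresource spec above reads and writes /scale on the StatefulSet rather than patching spec.replicas directly. A sketch with client-go's GetScale/UpdateScale, assuming k8s.io/client-go as a dependency; the namespace and name are the ones from the log, the kubeconfig path is environment-specific, and the replica count is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	sts := cs.AppsV1().StatefulSets("statefulset-4876")
	// Read the scale subresource, bump replicas, write it back.
	scale, err := sts.GetScale(context.TODO(), "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2
	if _, err := sts.UpdateScale(context.TODO(), "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scaled ss to", scale.Spec.Replicas)
}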
• [SLOW TEST:30.099 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":11,"skipped":222,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:12.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Apr 22 22:00:12.357: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 22 22:00:12.357: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 22 22:00:12.361: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 22 22:00:12.362: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 22 22:00:12.368: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 22 22:00:12.368: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 22 22:00:12.390: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 22 22:00:12.390: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 22 22:00:16.923: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 and labels map[test-deployment-static:true] Apr 22 22:00:16.923: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 and labels map[test-deployment-static:true] Apr 22 22:00:16.929: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Apr 22 22:00:16.935: INFO: observed event type ADDED STEP: waiting for Replicas to scale Apr 22 22:00:16.936: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 Apr 22 22:00:16.936: INFO: observed 
Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 Apr 22 22:00:16.936: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 Apr 22 22:00:16.936: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 Apr 22 22:00:16.936: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 Apr 22 22:00:16.936: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 Apr 22 22:00:16.936: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 Apr 22 22:00:16.936: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 0 Apr 22 22:00:16.937: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 Apr 22 22:00:16.937: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 Apr 22 22:00:16.937: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:16.937: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:16.937: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:16.937: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:16.940: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:16.940: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:16.947: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:16.947: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:16.952: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 Apr 22 22:00:16.952: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 Apr 22 22:00:16.964: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 Apr 22 22:00:16.964: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 Apr 22 22:00:20.862: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:20.862: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:20.878: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 STEP: listing Deployments Apr 22 22:00:20.883: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Apr 22 22:00:20.895: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Apr 22 22:00:20.903: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Apr 22 22:00:20.903: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Apr 22 22:00:20.913: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Apr 22 
22:00:20.921: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Apr 22 22:00:20.927: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Apr 22 22:00:24.012: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Apr 22 22:00:24.020: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Apr 22 22:00:24.027: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Apr 22 22:00:24.041: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Apr 22 22:00:26.558: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Apr 22 22:00:26.580: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 Apr 22 22:00:26.581: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 Apr 22 22:00:26.581: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 Apr 22 22:00:26.581: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 Apr 22 22:00:26.581: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 1 Apr 22 22:00:26.581: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:26.581: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:26.581: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:26.581: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 2 Apr 22 22:00:26.581: INFO: observed Deployment test-deployment in namespace deployment-3994 with ReadyReplicas 3 STEP: deleting the Deployment Apr 22 22:00:26.587: INFO: observed event type MODIFIED Apr 22 22:00:26.587: INFO: observed event type MODIFIED Apr 22 22:00:26.588: INFO: observed event type MODIFIED Apr 22 22:00:26.588: INFO: observed event type MODIFIED Apr 22 22:00:26.588: INFO: observed event type MODIFIED Apr 22 22:00:26.588: INFO: observed event type MODIFIED Apr 22 22:00:26.588: INFO: observed event type MODIFIED Apr 22 22:00:26.588: INFO: observed event type MODIFIED Apr 22 22:00:26.588: INFO: observed event type MODIFIED Apr 22 22:00:26.588: INFO: observed event type MODIFIED Apr 22 22:00:26.588: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Apr 22 22:00:26.590: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:26.593: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "deployment-3994" for this suite. • [SLOW TEST:14.274 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":24,"skipped":208,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:38.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1865, will wait for the garbage collector to delete the pods Apr 22 21:59:42.992: INFO: Deleting Job.batch foo took: 4.432269ms Apr 22 21:59:43.092: INFO: Terminating Job.batch foo pods took: 100.125508ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:28.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1865" for this suite. • [SLOW TEST:49.298 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":12,"skipped":172,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:23.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:00:23.738: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 22 22:00:28.742: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 22 22:00:30.752: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Apr 22 22:00:30.766: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3225 
c880ae7f-ff2a-4fe3-a5a2-f76aed65dc9a 36538 1 2022-04-22 22:00:30 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2022-04-22 22:00:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0050ff0b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 22 22:00:30.768: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-3225 ea3fc61e-8b1d-4d05-b91d-a6697a9ec589 36540 1 2022-04-22 22:00:30 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c880ae7f-ff2a-4fe3-a5a2-f76aed65dc9a 0xc0050ff4e7 0xc0050ff4e8}] [] [{kube-controller-manager Update apps/v1 2022-04-22 22:00:30 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c880ae7f-ff2a-4fe3-a5a2-f76aed65dc9a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0050ff578 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:00:30.768: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 22 22:00:30.769: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3225 e41e39dc-c769-419f-a355-2f9d92f20ee8 36539 1 2022-04-22 22:00:23 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment c880ae7f-ff2a-4fe3-a5a2-f76aed65dc9a 0xc0050ff3d7 0xc0050ff3d8}] [] [{e2e.test Update apps/v1 2022-04-22 22:00:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-22 22:00:30 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"c880ae7f-ff2a-4fe3-a5a2-f76aed65dc9a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0050ff478 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:00:30.772: INFO: Pod "test-cleanup-controller-2vpgb" is available: &Pod{ObjectMeta:{test-cleanup-controller-2vpgb test-cleanup-controller- deployment-3225 32a9099f-a289-43ee-8081-7e1f114cf732 36526 0 2022-04-22 22:00:23 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.59" ], "mac": "5a:a0:4a:79:b7:db", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.59" ], "mac": "5a:a0:4a:79:b7:db", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-controller e41e39dc-c769-419f-a355-2f9d92f20ee8 0xc0050ff9a7 0xc0050ff9a8}] [] [{kube-controller-manager Update v1 2022-04-22 22:00:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e41e39dc-c769-419f-a355-2f9d92f20ee8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:00:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.59\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ztg8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztg8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:
*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:00:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:00:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:00:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:00:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.59,StartTime:2022-04-22 22:00:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:00:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://85f935de9e4c9ea02af9a74e46506acded1e125b06010258982d841f40c1d980,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:30.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3225" for this suite. 
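------------------------------
The Deployment dump above is the crux of this spec: spec.revisionHistoryLimit is *0, so once the new ReplicaSet test-cleanup-deployment-5b4d99b59b supersedes the adopted test-cleanup-controller, the controller may keep zero old ReplicaSets and deletes them. A minimal client-go sketch of such a Deployment follows; it is illustrative only (not the suite's code), assumes the kubeconfig path the suite logs at startup, and reuses the names and agnhost image from the dump.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// 0 = delete every old ReplicaSet after a rollout; this is the
			// setting the "should delete old replica sets" spec asserts on.
			RevisionHistoryLimit: int32Ptr(0),
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "cleanup-pod"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"name": "cleanup-pod"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("default").Create(
		context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------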
• [SLOW TEST:7.066 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":12,"skipped":228,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:23.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-ef0585e0-11db-489d-ac4f-5db81dcc95eb STEP: Creating secret with name s-test-opt-upd-5184409f-1914-4b3e-9aa1-351b07333a60 STEP: Creating the pod Apr 22 21:59:23.869: INFO: The status of Pod pod-projected-secrets-547b63b3-0ea6-4552-b534-51ebe3296aa7 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:25.872: INFO: The status of Pod pod-projected-secrets-547b63b3-0ea6-4552-b534-51ebe3296aa7 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:27.873: INFO: The status of Pod pod-projected-secrets-547b63b3-0ea6-4552-b534-51ebe3296aa7 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:29.873: INFO: The status of Pod pod-projected-secrets-547b63b3-0ea6-4552-b534-51ebe3296aa7 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-ef0585e0-11db-489d-ac4f-5db81dcc95eb STEP: Updating secret s-test-opt-upd-5184409f-1914-4b3e-9aa1-351b07333a60 STEP: Creating secret with name s-test-opt-create-ccfcccb3-d9c5-43c6-aaf9-f912a0539ebc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:36.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3415" for this suite. 
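------------------------------
Context for the projected-secret spec above: the pod consumes the secrets through projected volume sources marked optional, which is why one source (s-test-opt-del-...) can be deleted and a new one (s-test-opt-create-...) created while the pod keeps running, and why the test then simply waits for the kubelet to re-sync the mounted files. A sketch of that volume shape, using the two secret names from the log; the volume name and mount details are assumptions, and the later-created secret would be projected the same way.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

// optionalSecretsVolume builds a projected volume whose secret sources are
// optional: missing sources are tolerated, and changes to the backing
// secrets appear in the mounted files after the kubelet's periodic sync.
func optionalSecretsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret-volume", // hypothetical volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "s-test-opt-del-ef0585e0-11db-489d-ac4f-5db81dcc95eb",
						},
						Optional: boolPtr(true),
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "s-test-opt-upd-5184409f-1914-4b3e-9aa1-351b07333a60",
						},
						Optional: boolPtr(true),
					}},
				},
			},
		},
	}
}

func main() { _ = optionalSecretsVolume() }
------------------------------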
• [SLOW TEST:73.097 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:28.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Apr 22 22:00:28.268: INFO: observed Pod pod-test in namespace pods-1829 in phase Pending with labels: map[test-pod-static:true] & conditions [] Apr 22 22:00:28.270: INFO: observed Pod pod-test in namespace pods-1829 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC }] Apr 22 22:00:29.166: INFO: observed Pod pod-test in namespace pods-1829 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC }] Apr 22 22:00:32.074: INFO: observed Pod pod-test in namespace pods-1829 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC }] Apr 22 22:00:37.897: INFO: observed Pod pod-test in namespace pods-1829 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC }] Apr 22 22:00:38.821: 
INFO: Found Pod pod-test in namespace pods-1829 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:00:28 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Apr 22 22:00:38.832: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Apr 22 22:00:38.854: INFO: observed event type ADDED Apr 22 22:00:38.854: INFO: observed event type MODIFIED Apr 22 22:00:38.854: INFO: observed event type MODIFIED Apr 22 22:00:38.854: INFO: observed event type MODIFIED Apr 22 22:00:38.854: INFO: observed event type MODIFIED Apr 22 22:00:38.854: INFO: observed event type MODIFIED Apr 22 22:00:38.854: INFO: observed event type MODIFIED Apr 22 22:00:38.854: INFO: observed event type MODIFIED Apr 22 22:00:38.855: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:38.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1829" for this suite. • [SLOW TEST:10.638 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":13,"skipped":179,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:22.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3891 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3891;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3891 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3891;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3891.svc A)" && test -n "$$check" && 
echo OK > /results/wheezy_udp@dns-test-service.dns-3891.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3891.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3891.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3891.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3891.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3891.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3891.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3891.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3891.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3891.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3891.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3891.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 242.53.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.53.242_udp@PTR;check="$$(dig +tcp +noall +answer +search 242.53.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.53.242_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3891 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3891;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3891 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3891;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3891.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3891.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3891.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3891.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3891.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3891.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3891.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3891.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3891.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3891.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3891.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3891.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3891.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 242.53.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.53.242_udp@PTR;check="$$(dig +tcp +noall +answer +search 242.53.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.53.242_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 22:00:36.628: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.631: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.633: INFO: Unable to read wheezy_udp@dns-test-service.dns-3891 from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.636: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3891 from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.638: INFO: Unable to read wheezy_udp@dns-test-service.dns-3891.svc from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.640: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3891.svc from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.643: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3891.svc from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.646: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3891.svc from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.663: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.666: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.668: INFO: Unable to read jessie_udp@dns-test-service.dns-3891 from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.670: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-3891 from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.672: INFO: Unable to read jessie_udp@dns-test-service.dns-3891.svc from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.675: INFO: Unable to read jessie_tcp@dns-test-service.dns-3891.svc from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.677: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3891.svc from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.680: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3891.svc from pod dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea: the server could not find the requested resource (get pods dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea) Apr 22 22:00:36.694: INFO: Lookups using dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3891 wheezy_tcp@dns-test-service.dns-3891 wheezy_udp@dns-test-service.dns-3891.svc wheezy_tcp@dns-test-service.dns-3891.svc wheezy_udp@_http._tcp.dns-test-service.dns-3891.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3891.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3891 jessie_tcp@dns-test-service.dns-3891 jessie_udp@dns-test-service.dns-3891.svc jessie_tcp@dns-test-service.dns-3891.svc jessie_udp@_http._tcp.dns-test-service.dns-3891.svc jessie_tcp@_http._tcp.dns-test-service.dns-3891.svc] Apr 22 22:00:41.760: INFO: DNS probes using dns-3891/dns-test-f1eee820-fdd6-4653-8ec1-8efba172c2ea succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:41.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3891" for this suite. 
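------------------------------
The names probed above resolve against a headless test service: partial qualified names like dns-test-service.dns-3891 work via the pod's DNS search path, and SRV records such as _http._tcp.dns-test-service.dns-3891.svc exist only because the service declares a named "http" port. A minimal sketch of that service, reconstructed from the probed names (the selector is an assumption, and the real test also creates a second service, test-service-2):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// headlessService sketches the DNS test's headless service: ClusterIP "None"
// gives each backing pod its own A record, and the named TCP port "http" is
// what the _http._tcp SRV lookups in the probe loop resolve against.
func headlessService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service", Namespace: "dns-3891"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"}, // assumed selector
			Ports: []corev1.ServicePort{{
				Name:     "http",
				Protocol: corev1.ProtocolTCP,
				Port:     80,
			}},
		},
	}
}

func main() { _ = headlessService() }
------------------------------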
• [SLOW TEST:19.230 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":156,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:36.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:00:37.370: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:00:39.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261637, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261637, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261637, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261637, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:00:42.391: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:42.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3899" for this suite. STEP: Destroying namespace "webhook-3899-markers" for this suite. 
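------------------------------
The listing and collection-delete steps above correspond to two admissionregistration/v1 calls. A hedged client-go sketch follows; the label selector is an assumption (the suite labels its test webhooks so that unrelated cluster webhooks are left alone), and after the DeleteCollection a previously non-compliant configMap is admitted again, which is what the final step verifies.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	webhooks := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()

	// "Listing all of the created validation webhooks", filtered by a
	// hypothetical test label.
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=true"}
	list, err := webhooks.List(context.TODO(), sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d test webhooks\n", len(list.Items))

	// "Deleting the collection of validation webhooks" in one call.
	if err := webhooks.DeleteCollection(context.TODO(), metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}
------------------------------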
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.575 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":6,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:38.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Apr 22 22:00:38.916: INFO: Waiting up to 5m0s for pod "downward-api-afee39d4-9187-487d-a293-5dc88d8d3099" in namespace "downward-api-8089" to be "Succeeded or Failed" Apr 22 22:00:38.918: INFO: Pod "downward-api-afee39d4-9187-487d-a293-5dc88d8d3099": Phase="Pending", Reason="", readiness=false. Elapsed: 1.890539ms Apr 22 22:00:40.921: INFO: Pod "downward-api-afee39d4-9187-487d-a293-5dc88d8d3099": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005109407s Apr 22 22:00:42.925: INFO: Pod "downward-api-afee39d4-9187-487d-a293-5dc88d8d3099": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008721114s STEP: Saw pod success Apr 22 22:00:42.925: INFO: Pod "downward-api-afee39d4-9187-487d-a293-5dc88d8d3099" satisfied condition "Succeeded or Failed" Apr 22 22:00:42.927: INFO: Trying to get logs from node node1 pod downward-api-afee39d4-9187-487d-a293-5dc88d8d3099 container dapi-container: STEP: delete the pod Apr 22 22:00:43.097: INFO: Waiting for pod downward-api-afee39d4-9187-487d-a293-5dc88d8d3099 to disappear Apr 22 22:00:43.100: INFO: Pod downward-api-afee39d4-9187-487d-a293-5dc88d8d3099 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:43.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8089" for this suite. 
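------------------------------
The dapi-container above receives its pod's UID through a downward-API field reference rather than any API call from inside the container. A minimal sketch of the relevant env wiring (the variable name is hypothetical; metadata.uid is the field the test exercises):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// downwardAPIEnv sketches how a pod exposes its own UID (and, commonly, its
// name and namespace) as environment variables via downward-API field refs.
func downwardAPIEnv() []corev1.EnvVar {
	return []corev1.EnvVar{{
		Name: "POD_UID", // hypothetical variable name
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{
				APIVersion: "v1",
				FieldPath:  "metadata.uid",
			},
		},
	}}
}

func main() { _ = downwardAPIEnv() }
------------------------------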
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":187,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:43.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:43.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8355" for this suite. • ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:26.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-5932 STEP: creating service affinity-clusterip in namespace services-5932 STEP: creating replication controller affinity-clusterip in namespace services-5932 I0422 22:00:26.649582 32 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-5932, replica count: 3 I0422 22:00:29.700456 32 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:00:32.701587 32 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 22:00:32.707: INFO: Creating new exec pod Apr 22 22:00:37.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5932 exec execpod-affinityvqwps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Apr 22 22:00:38.031: INFO: stderr: "+ echo 
hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Apr 22 22:00:38.032: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 22:00:38.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5932 exec execpod-affinityvqwps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.32.87 80' Apr 22 22:00:38.272: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.32.87 80\nConnection to 10.233.32.87 80 port [tcp/http] succeeded!\n" Apr 22 22:00:38.272: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 22:00:38.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5932 exec execpod-affinityvqwps -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.32.87:80/ ; done' Apr 22 22:00:38.588: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.32.87:80/\n" Apr 22 22:00:38.588: INFO: stdout: "\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz\naffinity-clusterip-6vfbz" Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz 
Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Received response from host: affinity-clusterip-6vfbz Apr 22 22:00:38.588: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-5932, will wait for the garbage collector to delete the pods Apr 22 22:00:38.653: INFO: Deleting ReplicationController affinity-clusterip took: 4.844173ms Apr 22 22:00:38.753: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.588242ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:47.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5932" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:20.753 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":25,"skipped":215,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:42.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-95e065eb-bc3b-46a0-8ae1-2ec3f256cb23 STEP: Creating a pod to test consume configMaps Apr 22 22:00:42.641: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8606700b-7465-49e4-b4c4-c3bd84ffdd44" in namespace "projected-4988" to be "Succeeded or Failed" Apr 22 22:00:42.643: INFO: Pod "pod-projected-configmaps-8606700b-7465-49e4-b4c4-c3bd84ffdd44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.000189ms Apr 22 22:00:44.648: INFO: Pod "pod-projected-configmaps-8606700b-7465-49e4-b4c4-c3bd84ffdd44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007657872s Apr 22 22:00:46.657: INFO: Pod "pod-projected-configmaps-8606700b-7465-49e4-b4c4-c3bd84ffdd44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016646752s Apr 22 22:00:48.660: INFO: Pod "pod-projected-configmaps-8606700b-7465-49e4-b4c4-c3bd84ffdd44": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019467522s STEP: Saw pod success Apr 22 22:00:48.660: INFO: Pod "pod-projected-configmaps-8606700b-7465-49e4-b4c4-c3bd84ffdd44" satisfied condition "Succeeded or Failed" Apr 22 22:00:48.663: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-8606700b-7465-49e4-b4c4-c3bd84ffdd44 container agnhost-container: STEP: delete the pod Apr 22 22:00:48.678: INFO: Waiting for pod pod-projected-configmaps-8606700b-7465-49e4-b4c4-c3bd84ffdd44 to disappear Apr 22 22:00:48.680: INFO: Pod pod-projected-configmaps-8606700b-7465-49e4-b4c4-c3bd84ffdd44 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:48.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4988" for this suite. • [SLOW TEST:6.084 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":117,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:48.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 22 22:00:48.721: INFO: Waiting up to 5m0s for pod "pod-fe541a92-1e2b-4973-a5df-885fbfb172aa" in namespace "emptydir-4685" to be "Succeeded or Failed" Apr 22 22:00:48.724: INFO: Pod "pod-fe541a92-1e2b-4973-a5df-885fbfb172aa": Phase="Pending", Reason="", readiness=false. Elapsed: 1.999716ms Apr 22 22:00:50.728: INFO: Pod "pod-fe541a92-1e2b-4973-a5df-885fbfb172aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006480794s Apr 22 22:00:52.731: INFO: Pod "pod-fe541a92-1e2b-4973-a5df-885fbfb172aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009209173s Apr 22 22:00:54.736: INFO: Pod "pod-fe541a92-1e2b-4973-a5df-885fbfb172aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014256565s STEP: Saw pod success Apr 22 22:00:54.736: INFO: Pod "pod-fe541a92-1e2b-4973-a5df-885fbfb172aa" satisfied condition "Succeeded or Failed" Apr 22 22:00:54.738: INFO: Trying to get logs from node node2 pod pod-fe541a92-1e2b-4973-a5df-885fbfb172aa container test-container: STEP: delete the pod Apr 22 22:00:54.754: INFO: Waiting for pod pod-fe541a92-1e2b-4973-a5df-885fbfb172aa to disappear Apr 22 22:00:54.756: INFO: Pod pod-fe541a92-1e2b-4973-a5df-885fbfb172aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:54.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4685" for this suite. • [SLOW TEST:6.074 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":117,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:54.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added Apr 22 22:00:54.819: INFO: Found Service test-service-mv58h in namespace services-5078 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Apr 22 22:00:54.819: INFO: Service test-service-mv58h created STEP: Getting /status Apr 22 22:00:54.822: INFO: Service test-service-mv58h has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Apr 22 22:00:54.826: INFO: observed Service test-service-mv58h in namespace services-5078 with annotations: map[] & LoadBalancer: {[]} Apr 22 22:00:54.826: INFO: Found Service test-service-mv58h in namespace services-5078 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Apr 22 22:00:54.826: INFO: Service test-service-mv58h has service status patched STEP: updating the ServiceStatus Apr 22 22:00:54.833: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Apr 22 22:00:54.834: INFO: Observed Service test-service-mv58h in namespace services-5078 with annotations: map[] & Conditions: {[]} Apr 22 22:00:54.834: INFO: Observed event: &Service{ObjectMeta:{test-service-mv58h services-5078 
7e93fbb8-8eaf-4dac-874a-606001e1e8dd 37240 0 2022-04-22 22:00:54 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-04-22 22:00:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.23.172,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.23.172],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Apr 22 22:00:54.834: INFO: Found Service test-service-mv58h in namespace services-5078 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Apr 22 22:00:54.834: INFO: Service test-service-mv58h has service status updated STEP: patching the service STEP: watching for the Service to be patched Apr 22 22:00:54.844: INFO: observed Service test-service-mv58h in namespace services-5078 with labels: map[test-service-static:true] Apr 22 22:00:54.844: INFO: observed Service test-service-mv58h in namespace services-5078 with labels: map[test-service-static:true] Apr 22 22:00:54.845: INFO: observed Service test-service-mv58h in namespace services-5078 with labels: map[test-service-static:true] Apr 22 22:00:54.845: INFO: Found Service test-service-mv58h in namespace services-5078 with labels: map[test-service:patched test-service-static:true] Apr 22 22:00:54.845: INFO: Service test-service-mv58h patched STEP: deleting the service STEP: watching for the Service to be deleted Apr 22 22:00:54.855: INFO: Observed event: ADDED Apr 22 22:00:54.855: INFO: Observed event: MODIFIED Apr 22 22:00:54.855: INFO: Observed event: MODIFIED Apr 22 22:00:54.855: INFO: Observed event: MODIFIED Apr 22 22:00:54.855: INFO: Found Service test-service-mv58h in namespace services-5078 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Apr 22 22:00:54.855: INFO: Service test-service-mv58h deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:54.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5078" for this suite. 
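------------------------------
Service status is a subresource, so the LoadBalancer ingress seen above (203.0.113.1, a documentation-range address) is applied against /status rather than the main object. A hedged sketch of that patch, reusing the service and namespace names from the log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Merge-patch the status subresource: set a fake LoadBalancer ingress IP
	// and an annotation, mirroring the "patching the ServiceStatus" step.
	payload := []byte(`{
	  "metadata": {"annotations": {"patchedstatus": "true"}},
	  "status": {"loadBalancer": {"ingress": [{"ip": "203.0.113.1"}]}}
	}`)
	if _, err := cs.CoreV1().Services("services-5078").Patch(
		context.TODO(), "test-service-mv58h", types.MergePatchType,
		payload, metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}
------------------------------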
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":9,"skipped":126,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:47.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Apr 22 22:00:47.404: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:00:56.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2413" for this suite. • [SLOW TEST:9.067 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":26,"skipped":218,"failed":0} SSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":15,"skipped":192,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:43.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:00:43.682: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:00:45.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261643, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261643, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261643, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261643, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:00:47.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261643, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261643, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261643, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261643, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:00:50.705: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:00.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6987" for this suite. STEP: Destroying namespace "webhook-6987-markers" for this suite. 
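------------------------------
The deny behavior above is driven by a ValidatingWebhookConfiguration whose rules cover CREATE and UPDATE of pods and configmaps, pointing at the e2e-test-webhook service deployed earlier. A trimmed, hypothetical registration sketch (the configuration name and handler path are assumptions; the CA bundle is elided):

package main

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func webhookConfig(caBundle []byte) *admissionregistrationv1.ValidatingWebhookConfiguration {
	failClosed := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/always-deny" // hypothetical handler path
	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-pod-and-configmap-creation"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name:                    "deny-pod-and-configmap-creation.k8s.io",
			FailurePolicy:           &failClosed, // a hanging webhook also rejects
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create,
					admissionregistrationv1.Update,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods", "configmaps"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-6987",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
		}},
	}
}

func main() { _ = webhookConfig(nil) }
------------------------------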
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.629 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":16,"skipped":192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":403,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:47.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-42f2ab23-13ae-4a8d-8661-f26b0b125ef5 STEP: Creating the pod Apr 22 21:59:47.296: INFO: The status of Pod pod-configmaps-7764275c-72d2-4f31-97ec-1c8f03990bf1 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:49.299: INFO: The status of Pod pod-configmaps-7764275c-72d2-4f31-97ec-1c8f03990bf1 is Pending, waiting for it to be Running (with Ready = true) Apr 22 21:59:51.303: INFO: The status of Pod pod-configmaps-7764275c-72d2-4f31-97ec-1c8f03990bf1 is Running (Ready = true) STEP: Updating configmap configmap-test-upd-42f2ab23-13ae-4a8d-8661-f26b0b125ef5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:01.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-864" for this suite. 
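(The long "waiting to observe update in volume" stretch above, roughly 21:59:51 to 22:01:01, reflects how configMap volume updates propagate: the API object changes immediately, while the kubelet rewrites the projected file only on a later sync. A minimal sketch of the update half, reusing the ConfigMap name and namespace from this run; the data key and value are illustrative.)

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	cms := client.CoreV1().ConfigMaps("configmap-864") // namespace from the run above

	// Fetch, mutate, write back; the kubelet refreshes the mounted file on its
	// next sync period rather than instantly, which is what the test waits for.
	cm, err := cms.Get(context.TODO(), "configmap-test-upd-42f2ab23-13ae-4a8d-8661-f26b0b125ef5", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // illustrative key/value
	if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("configmap updated; mounted copy follows after the kubelet sync")
}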
• [SLOW TEST:74.388 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":403,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:56.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-f2557ff0-ea10-41d0-87d5-410cdcce6a36 STEP: Creating configMap with name cm-test-opt-upd-620a7e4c-b873-4cad-9164-c4cc71dad002 STEP: Creating the pod Apr 22 22:00:56.522: INFO: The status of Pod pod-projected-configmaps-b5a6cf4c-0411-43cf-9ba1-27ca150da3e4 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:00:58.527: INFO: The status of Pod pod-projected-configmaps-b5a6cf4c-0411-43cf-9ba1-27ca150da3e4 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:01:00.529: INFO: The status of Pod pod-projected-configmaps-b5a6cf4c-0411-43cf-9ba1-27ca150da3e4 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:01:02.526: INFO: The status of Pod pod-projected-configmaps-b5a6cf4c-0411-43cf-9ba1-27ca150da3e4 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-f2557ff0-ea10-41d0-87d5-410cdcce6a36 STEP: Updating configmap cm-test-opt-upd-620a7e4c-b873-4cad-9164-c4cc71dad002 STEP: Creating configMap with name cm-test-opt-create-a84f273e-3ab9-49b1-857a-69fb3e04187e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:06.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4131" for this suite. 
• [SLOW TEST:10.141 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":230,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:00.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Apr 22 22:01:00.913: INFO: namespace kubectl-3671 Apr 22 22:01:00.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3671 create -f -' Apr 22 22:01:01.309: INFO: stderr: "" Apr 22 22:01:01.309: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Apr 22 22:01:02.315: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:01:02.315: INFO: Found 0 / 1 Apr 22 22:01:03.314: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:01:03.314: INFO: Found 0 / 1 Apr 22 22:01:04.314: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:01:04.314: INFO: Found 0 / 1 Apr 22 22:01:05.313: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:01:05.313: INFO: Found 1 / 1 Apr 22 22:01:05.313: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 22 22:01:05.316: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:01:05.316: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 22 22:01:05.316: INFO: wait on agnhost-primary startup in kubectl-3671 Apr 22 22:01:05.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3671 logs agnhost-primary-kwjz5 agnhost-primary' Apr 22 22:01:05.480: INFO: stderr: "" Apr 22 22:01:05.481: INFO: stdout: "Paused\n" STEP: exposing RC Apr 22 22:01:05.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3671 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Apr 22 22:01:05.688: INFO: stderr: "" Apr 22 22:01:05.688: INFO: stdout: "service/rm2 exposed\n" Apr 22 22:01:05.691: INFO: Service rm2 in namespace kubectl-3671 found. STEP: exposing service Apr 22 22:01:07.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3671 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Apr 22 22:01:07.907: INFO: stderr: "" Apr 22 22:01:07.907: INFO: stdout: "service/rm3 exposed\n" Apr 22 22:01:07.910: INFO: Service rm3 in namespace kubectl-3671 found. 
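(Each 'kubectl expose' call above is, underneath, a Service create whose selector matches the target's pod labels. A minimal client-go equivalent of the first call, 'expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'; the app=agnhost selector is taken from the pod selector logged above.)

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Service port 1234 fronts the agnhost pods' port 6379, exactly as the
	// expose flags request; traffic selection is purely label-based.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "agnhost"},
			Ports: []corev1.ServicePort{{
				Port:       1234,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}
	if _, err := client.CoreV1().Services("kubectl-3671").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("service rm2 exposed")
}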
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:09.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3671" for this suite. • [SLOW TEST:9.032 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":17,"skipped":220,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:54.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:00:54.925: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:00:56.930: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:00:58.928: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Running (Ready = false) Apr 22 22:01:00.930: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Running (Ready = false) Apr 22 22:01:02.930: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Running (Ready = false) Apr 22 22:01:04.930: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Running (Ready = false) Apr 22 22:01:06.929: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Running (Ready = false) Apr 22 22:01:08.929: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Running (Ready = false) Apr 22 22:01:10.929: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Running (Ready = false) Apr 22 22:01:12.930: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Running (Ready = false) Apr 22 22:01:14.933: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Running (Ready = false) Apr 22 22:01:16.929: INFO: The status of Pod test-webserver-bee61cdc-4ccd-433f-966e-87b4add84507 is Running (Ready = true) Apr 22 22:01:16.932: INFO: Container started at 2022-04-22 22:00:57 +0000 UTC, pod became ready at 2022-04-22 22:01:14 +0000 UTC [AfterEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:16.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5652" for this suite. • [SLOW TEST:22.051 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":136,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:01.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:01:05.717: INFO: Deleting pod "var-expansion-1f78cdc0-0296-4c83-8ec7-8e645bb636d3" in namespace "var-expansion-5753" Apr 22 22:01:05.721: INFO: Wait up to 5m0s for pod "var-expansion-1f78cdc0-0296-4c83-8ec7-8e645bb636d3" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:19.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5753" for this suite. 
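(The spec above creates a pod whose volume subpath contains backticks and asserts that the pod fails before deleting it; the pod manifest itself is not echoed in the log. Only $(VAR_NAME) references are expanded in subPathExpr, and backtick expressions are never shell-evaluated, so a pod relying on them fails as the spec's title says. A minimal sketch of the valid form of the field, with illustrative names and image tag.)

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "work",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "c",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1", // illustrative tag
				Command: []string{"sh", "-c", "sleep 3600"},
				Env:     []corev1.EnvVar{{Name: "POD_NAME", Value: "demo"}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "work",
					MountPath: "/data",
					// $(POD_NAME) is expanded from the container's env; a value
					// using backticks instead would not be shell-evaluated and
					// the pod would fail, which is what the spec above asserts.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}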
• [SLOW TEST:18.057 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":23,"skipped":421,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:06.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:01:07.221: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:01:09.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261667, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261667, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261667, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261667, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:01:12.247: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:01:12.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-132-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:20.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2127" for this suite. STEP: Destroying namespace "webhook-2127-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.657 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":28,"skipped":253,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:16.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:01:16.987: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 22 22:01:21.992: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Apr 22 22:01:21.998: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Apr 22 22:01:22.003: INFO: observed ReplicaSet test-rs in namespace replicaset-6042 with ReadyReplicas 1, AvailableReplicas 1 Apr 22 22:01:22.014: INFO: observed ReplicaSet test-rs in namespace replicaset-6042 with ReadyReplicas 1, AvailableReplicas 1 Apr 22 22:01:22.026: INFO: observed ReplicaSet test-rs in namespace replicaset-6042 with ReadyReplicas 1, AvailableReplicas 1 Apr 22 22:01:22.030: INFO: observed ReplicaSet test-rs in namespace replicaset-6042 with ReadyReplicas 1, AvailableReplicas 1 Apr 22 22:01:25.271: INFO: observed ReplicaSet test-rs in namespace replicaset-6042 with ReadyReplicas 2, AvailableReplicas 2 Apr 22 22:01:26.246: INFO: observed Replicaset test-rs in namespace replicaset-6042 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:26.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6042" for this suite. 
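(The "patching the ReplicaSet" step above is an ordinary strategic-merge patch of spec.replicas, after which the observed events report ReadyReplicas climbing to the new count. A minimal client-go sketch, reusing the name and namespace from this run; the replica count is illustrative.)

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Bump spec.replicas via strategic-merge patch; the controller then
	// creates pods until ReadyReplicas matches, as the log above shows.
	patch := []byte(`{"spec":{"replicas":3}}`)
	rs, err := client.AppsV1().ReplicaSets("replicaset-6042").
		Patch(context.TODO(), "test-rs", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("patched %s, spec.replicas now %d", rs.Name, *rs.Spec.Replicas)
}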
• [SLOW TEST:9.299 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":11,"skipped":142,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:20.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-1c59aaf3-183c-4654-88a5-a208a23d8b42 STEP: Creating a pod to test consume configMaps Apr 22 22:01:20.414: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-549c1bdc-a68a-4f3d-958e-0092568b7022" in namespace "projected-8648" to be "Succeeded or Failed" Apr 22 22:01:20.416: INFO: Pod "pod-projected-configmaps-549c1bdc-a68a-4f3d-958e-0092568b7022": Phase="Pending", Reason="", readiness=false. Elapsed: 2.461971ms Apr 22 22:01:22.420: INFO: Pod "pod-projected-configmaps-549c1bdc-a68a-4f3d-958e-0092568b7022": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005825844s Apr 22 22:01:24.424: INFO: Pod "pod-projected-configmaps-549c1bdc-a68a-4f3d-958e-0092568b7022": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009899964s Apr 22 22:01:26.427: INFO: Pod "pod-projected-configmaps-549c1bdc-a68a-4f3d-958e-0092568b7022": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013544481s STEP: Saw pod success Apr 22 22:01:26.427: INFO: Pod "pod-projected-configmaps-549c1bdc-a68a-4f3d-958e-0092568b7022" satisfied condition "Succeeded or Failed" Apr 22 22:01:26.429: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-549c1bdc-a68a-4f3d-958e-0092568b7022 container agnhost-container: STEP: delete the pod Apr 22 22:01:26.441: INFO: Waiting for pod pod-projected-configmaps-549c1bdc-a68a-4f3d-958e-0092568b7022 to disappear Apr 22 22:01:26.443: INFO: Pod pod-projected-configmaps-549c1bdc-a68a-4f3d-958e-0092568b7022 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:26.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8648" for this suite. 
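(In the run above the "-volume-map-" ConfigMap is mounted with a key-to-path mapping, which is why the framework reads the container's logs to verify the remapped file. A minimal sketch of such a mapping using a projected volume; the ConfigMap name, namespace, image tag, and paths are illustrative assumptions.)

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Key "data-1" lands at /etc/projected/path/to/data-2 in the container
	// because of the items mapping, not at its default per-key filename.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"}, // illustrative
								Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "c",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				Command:      []string{"cat", "/etc/projected/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/projected"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}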
• [SLOW TEST:6.070 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":277,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:26.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-0775e781-1f8d-4ffd-b03b-52ae3a6ece23 STEP: Creating a pod to test consume configMaps Apr 22 22:01:26.506: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-10755395-5b6a-4d24-9d33-8cdc14d64a22" in namespace "projected-3017" to be "Succeeded or Failed" Apr 22 22:01:26.509: INFO: Pod "pod-projected-configmaps-10755395-5b6a-4d24-9d33-8cdc14d64a22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.735648ms Apr 22 22:01:28.513: INFO: Pod "pod-projected-configmaps-10755395-5b6a-4d24-9d33-8cdc14d64a22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006651353s Apr 22 22:01:30.519: INFO: Pod "pod-projected-configmaps-10755395-5b6a-4d24-9d33-8cdc14d64a22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012400521s STEP: Saw pod success Apr 22 22:01:30.519: INFO: Pod "pod-projected-configmaps-10755395-5b6a-4d24-9d33-8cdc14d64a22" satisfied condition "Succeeded or Failed" Apr 22 22:01:30.522: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-10755395-5b6a-4d24-9d33-8cdc14d64a22 container projected-configmap-volume-test: STEP: delete the pod Apr 22 22:01:30.536: INFO: Waiting for pod pod-projected-configmaps-10755395-5b6a-4d24-9d33-8cdc14d64a22 to disappear Apr 22 22:01:30.539: INFO: Pod pod-projected-configmaps-10755395-5b6a-4d24-9d33-8cdc14d64a22 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:30.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3017" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":286,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:30.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 22 22:00:30.888: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7093 90b8459c-32a4-49ee-8b53-ec70fb1a0591 36558 0 2022-04-22 22:00:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 22:00:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:00:30.888: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7093 90b8459c-32a4-49ee-8b53-ec70fb1a0591 36558 0 2022-04-22 22:00:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 22:00:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 22 22:00:40.896: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7093 90b8459c-32a4-49ee-8b53-ec70fb1a0591 36814 0 2022-04-22 22:00:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 22:00:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:00:40.897: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7093 90b8459c-32a4-49ee-8b53-ec70fb1a0591 36814 0 2022-04-22 22:00:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 22:00:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 22 22:00:50.906: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7093 90b8459c-32a4-49ee-8b53-ec70fb1a0591 37141 0 2022-04-22 22:00:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 22:00:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:00:50.906: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7093 90b8459c-32a4-49ee-8b53-ec70fb1a0591 37141 0 2022-04-22 22:00:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 22:00:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 22 22:01:00.912: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7093 90b8459c-32a4-49ee-8b53-ec70fb1a0591 37349 0 2022-04-22 22:00:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 22:00:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:01:00.912: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7093 90b8459c-32a4-49ee-8b53-ec70fb1a0591 37349 0 2022-04-22 22:00:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 22:00:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 22 22:01:10.922: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7093 af7a6b7f-c085-4ffe-97cc-edbc8fe85694 37547 0 2022-04-22 22:01:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-22 22:01:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:01:10.923: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7093 af7a6b7f-c085-4ffe-97cc-edbc8fe85694 37547 0 2022-04-22 22:01:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-22 22:01:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 22 22:01:20.930: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7093 af7a6b7f-c085-4ffe-97cc-edbc8fe85694 37727 0 2022-04-22 22:01:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-22 22:01:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:01:20.931: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7093 af7a6b7f-c085-4ffe-97cc-edbc8fe85694 37727 0 2022-04-22 22:01:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-22 22:01:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:30.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7093" for this suite. • [SLOW TEST:60.083 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":13,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:57:28.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-2874b158-98dd-4071-97b6-fddf2da70e8d in namespace container-probe-1556 Apr 22 21:57:35.042: INFO: Started pod test-webserver-2874b158-98dd-4071-97b6-fddf2da70e8d in namespace container-probe-1556 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 21:57:35.045: INFO: Initial restart count of pod test-webserver-2874b158-98dd-4071-97b6-fddf2da70e8d is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:35.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1556" for this suite. 
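(The four quiet minutes above are the point of this spec: the kubelet keeps probing /healthz and the pod's restart count must stay 0 the whole time. A minimal sketch of a pod with such an HTTP liveness probe, written against the v1.21-era client-go API, where Probe still embeds Handler (renamed ProbeHandler in later releases); the image, port, and thresholds are illustrative assumptions.)

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative
				Args:  []string{"test-webserver"},
				// The kubelet GETs /healthz every PeriodSeconds; while it keeps
				// returning 200 the container is never restarted, which is the
				// invariant the spec above watches for roughly four minutes.
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(80), // assumed server port
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}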
• [SLOW TEST:246.753 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:09.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-2062 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 22 22:01:09.978: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 22 22:01:10.010: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:01:12.014: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:01:14.014: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:01:16.015: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:01:18.017: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:01:20.014: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:01:22.013: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:01:24.015: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:01:26.014: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:01:28.014: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:01:30.014: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:01:32.014: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 22 22:01:32.019: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 22 22:01:38.053: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Apr 22 22:01:38.053: INFO: Going to poll 10.244.3.171 on port 8080 at least 0 times, with a maximum of 34 tries before failing Apr 22 22:01:38.055: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.171:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2062 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:38.055: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:38.142: INFO: Found all 1 expected endpoints: [netserver-0] Apr 22 22:01:38.142: INFO: Going to poll 10.244.4.76 on port 8080 at least 0 times, with a maximum of 34 tries before failing Apr 22 22:01:38.144: INFO: ExecWithOptions 
{Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.76:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2062 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:38.144: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:38.229: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:38.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2062" for this suite. • [SLOW TEST:28.282 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":233,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:35.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:01:35.637: INFO: Creating simple deployment test-new-deployment Apr 22 22:01:35.646: INFO: deployment "test-new-deployment" doesn't have the required revision set Apr 22 22:01:37.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261695, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261695, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261695, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261695, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] 
[sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Apr 22 22:01:39.681: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-7735 28d9a06f-17a0-4edc-8e82-29ad89985dae 38137 3 2022-04-22 22:01:35 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-04-22 22:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-22 22:01:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00067c728 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-22 22:01:38 +0000 UTC,LastTransitionTime:2022-04-22 22:01:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2022-04-22 22:01:38 +0000 UTC,LastTransitionTime:2022-04-22 22:01:35 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 22 22:01:39.684: INFO: New ReplicaSet
"test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-7735 009ad1a1-a6c0-4487-95e3-64e287c82756 38140 3 2022-04-22 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 28d9a06f-17a0-4edc-8e82-29ad89985dae 0xc00067cb27 0xc00067cb28}] [] [{kube-controller-manager Update apps/v1 2022-04-22 22:01:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28d9a06f-17a0-4edc-8e82-29ad89985dae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00067cb98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:01:39.687: INFO: Pod "test-new-deployment-847dcfb7fb-kcr25" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-kcr25 test-new-deployment-847dcfb7fb- deployment-7735 5c9baa5b-0fc4-40d1-b1d9-30519d3d86c4 38107 0 2022-04-22 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.173" ], "mac": "2a:c0:6a:4b:38:8d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.173" ], "mac": "2a:c0:6a:4b:38:8d", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 009ad1a1-a6c0-4487-95e3-64e287c82756 0xc00067cf2f 
0xc00067cf40}] [] [{kube-controller-manager Update v1 2022-04-22 22:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"009ad1a1-a6c0-4487-95e3-64e287c82756\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:01:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.173\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kj6zf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kj6zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce
:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:01:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:01:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:01:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.173,StartTime:2022-04-22 22:01:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:01:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://6e6116bef7ca220a5ffdd77f679e8655465dc84aa70f55b248b00566fff8c4e1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.173,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:01:39.688: INFO: Pod "test-new-deployment-847dcfb7fb-z9rwq" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-z9rwq test-new-deployment-847dcfb7fb- deployment-7735 de002cff-9eca-47ed-9371-1f77ce1098ac 38143 0 2022-04-22 22:01:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 009ad1a1-a6c0-4487-95e3-64e287c82756 0xc00067d12f 0xc00067d140}] [] [{kube-controller-manager Update v1 2022-04-22 22:01:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"009ad1a1-a6c0-4487-95e3-64e287c82756\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9mqcw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9mqcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:01:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:39.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7735" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:31.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-6843 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6843 STEP: Deleting pre-stop pod Apr 22 22:01:46.099: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:46.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6843" for this suite. 
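------------------------------
The PreStop spec above deletes the server pod and checks that its preStop hook fired before the container was killed (the tester's report shows "prestop": 1). A minimal client-go sketch of a pod carrying such a hook; the pod name, image, port, and path are illustrative, and corev1.Handler is the v1.21-era type (renamed LifecycleHandler in v1.23):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	grace := int64(30)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:  "tester",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Lifecycle: &corev1.Lifecycle{
					// Runs before the container receives SIGTERM; in the e2e
					// test the server pod counts these hits under "prestop".
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/prestop",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------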
• [SLOW TEST:15.082 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":14,"skipped":307,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:26.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Apr 22 22:01:26.301: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:49.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2027" for this suite. • [SLOW TEST:23.545 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":12,"skipped":151,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:49.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:01:49.860: INFO: Creating deployment "test-recreate-deployment" Apr 22 22:01:49.863: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 22 22:01:49.870: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 22 22:01:51.875: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 22 
22:01:51.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261709, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261709, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261709, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261709, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:01:53.880: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 22 22:01:53.888: INFO: Updating deployment test-recreate-deployment Apr 22 22:01:53.888: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Apr 22 22:01:53.929: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-8176 95a3445d-53b1-4c1b-9bcd-c7d3a1bba9b3 38545 2 2022-04-22 22:01:49 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-22 22:01:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-22 22:01:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0058ead18 
ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-04-22 22:01:53 +0000 UTC,LastTransitionTime:2022-04-22 22:01:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2022-04-22 22:01:53 +0000 UTC,LastTransitionTime:2022-04-22 22:01:49 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 22 22:01:53.932: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-8176 a6702ebf-8b1a-45e2-b841-c8e861c27b58 38543 1 2022-04-22 22:01:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 95a3445d-53b1-4c1b-9bcd-c7d3a1bba9b3 0xc0058eb320 0xc0058eb321}] [] [{kube-controller-manager Update apps/v1 2022-04-22 22:01:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"95a3445d-53b1-4c1b-9bcd-c7d3a1bba9b3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0058eb3c8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:01:53.932: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 22 22:01:53.932: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-8176 c242b3b8-e09d-4f96-8ab2-c9a946a1f41e 38533 2 2022-04-22 22:01:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 95a3445d-53b1-4c1b-9bcd-c7d3a1bba9b3 0xc0058eb1f7 0xc0058eb1f8}] [] [{kube-controller-manager Update apps/v1 2022-04-22 22:01:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"95a3445d-53b1-4c1b-9bcd-c7d3a1bba9b3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0058eb298 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:01:53.935: INFO: Pod "test-recreate-deployment-85d47dcb4-jmjfx" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-jmjfx test-recreate-deployment-85d47dcb4- deployment-8176 2b9729d9-3d15-4663-ad5c-efa52368f9f2 38546 0 2022-04-22 22:01:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 
ReplicaSet test-recreate-deployment-85d47dcb4 a6702ebf-8b1a-45e2-b841-c8e861c27b58 0xc0058eba4f 0xc0058eba60}] [] [{kube-controller-manager Update v1 2022-04-22 22:01:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6702ebf-8b1a-45e2-b841-c8e861c27b58\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-04-22 22:01:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-58g8v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58g8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:
nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:01:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:01:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:01:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:01:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-04-22 22:01:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:53.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8176" for this suite. 
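------------------------------
In the dump above the old ReplicaSet (agnhost) is already scaled to 0 while the new one (httpd) still has its pod Pending: with strategy type Recreate the controller deletes every old pod before creating any new-revision pod. A hedged sketch of such a Deployment, with labels and images echoing the test but otherwise illustrative:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"name": "sample-pod-3"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "recreate-demo"}, // illustrative name
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate kills all old pods before any new-revision pod is
			// created, unlike the default RollingUpdate strategy.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
					}},
				},
			},
		},
	}
	if _, err := client.AppsV1().Deployments("default").Create(
		context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------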
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":13,"skipped":156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:46.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Apr 22 22:01:46.157: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:01:48.161: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:01:50.162: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Apr 22 22:01:50.177: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:01:52.181: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:01:54.180: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 22 22:01:54.182: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5748 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:54.182: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:54.272: INFO: Exec stderr: "" Apr 22 22:01:54.272: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5748 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:54.272: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:54.361: INFO: Exec stderr: "" Apr 22 22:01:54.361: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5748 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:54.361: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:54.461: INFO: Exec stderr: "" Apr 22 22:01:54.461: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5748 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:54.461: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:54.550: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 22 22:01:54.550: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5748 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:54.550: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:54.628: INFO: 
Exec stderr: "" Apr 22 22:01:54.628: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5748 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:54.628: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:54.706: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 22 22:01:54.706: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5748 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:54.706: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:54.792: INFO: Exec stderr: "" Apr 22 22:01:54.792: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5748 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:54.792: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:54.876: INFO: Exec stderr: "" Apr 22 22:01:54.876: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5748 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:54.876: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:54.964: INFO: Exec stderr: "" Apr 22 22:01:54.964: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5748 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:01:54.964: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:01:55.068: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:55.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5748" for this suite. 
• [SLOW TEST:8.958 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:38.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Apr 22 22:01:38.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 create -f -' Apr 22 22:01:38.684: INFO: stderr: "" Apr 22 22:01:38.684: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 22 22:01:38.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:01:38.853: INFO: stderr: "" Apr 22 22:01:38.853: INFO: stdout: "update-demo-nautilus-gtfs2 update-demo-nautilus-zlhjw " Apr 22 22:01:38.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods update-demo-nautilus-gtfs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:01:39.013: INFO: stderr: "" Apr 22 22:01:39.013: INFO: stdout: "" Apr 22 22:01:39.013: INFO: update-demo-nautilus-gtfs2 is created but not running Apr 22 22:01:44.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:01:44.176: INFO: stderr: "" Apr 22 22:01:44.176: INFO: stdout: "update-demo-nautilus-gtfs2 update-demo-nautilus-zlhjw " Apr 22 22:01:44.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods update-demo-nautilus-gtfs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:01:44.338: INFO: stderr: "" Apr 22 22:01:44.338: INFO: stdout: "" Apr 22 22:01:44.338: INFO: update-demo-nautilus-gtfs2 is created but not running Apr 22 22:01:49.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:01:49.477: INFO: stderr: "" Apr 22 22:01:49.477: INFO: stdout: "update-demo-nautilus-gtfs2 update-demo-nautilus-zlhjw " Apr 22 22:01:49.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods update-demo-nautilus-gtfs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:01:49.623: INFO: stderr: "" Apr 22 22:01:49.623: INFO: stdout: "" Apr 22 22:01:49.623: INFO: update-demo-nautilus-gtfs2 is created but not running Apr 22 22:01:54.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:01:54.803: INFO: stderr: "" Apr 22 22:01:54.804: INFO: stdout: "update-demo-nautilus-gtfs2 update-demo-nautilus-zlhjw " Apr 22 22:01:54.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods update-demo-nautilus-gtfs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:01:54.972: INFO: stderr: "" Apr 22 22:01:54.972: INFO: stdout: "true" Apr 22 22:01:54.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods update-demo-nautilus-gtfs2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 22:01:55.149: INFO: stderr: "" Apr 22 22:01:55.149: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 22 22:01:55.149: INFO: validating pod update-demo-nautilus-gtfs2 Apr 22 22:01:55.152: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:01:55.152: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 22:01:55.152: INFO: update-demo-nautilus-gtfs2 is verified up and running Apr 22 22:01:55.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods update-demo-nautilus-zlhjw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:01:55.316: INFO: stderr: "" Apr 22 22:01:55.316: INFO: stdout: "true" Apr 22 22:01:55.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods update-demo-nautilus-zlhjw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 22:01:55.485: INFO: stderr: "" Apr 22 22:01:55.486: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 22 22:01:55.486: INFO: validating pod update-demo-nautilus-zlhjw Apr 22 22:01:55.489: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:01:55.489: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 22:01:55.489: INFO: update-demo-nautilus-zlhjw is verified up and running STEP: using delete to clean up resources Apr 22 22:01:55.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 delete --grace-period=0 --force -f -' Apr 22 22:01:55.623: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 22:01:55.623: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 22 22:01:55.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get rc,svc -l name=update-demo --no-headers' Apr 22 22:01:55.839: INFO: stderr: "No resources found in kubectl-6174 namespace.\n" Apr 22 22:01:55.839: INFO: stdout: "" Apr 22 22:01:55.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6174 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 22 22:01:56.000: INFO: stderr: "" Apr 22 22:01:56.000: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:56.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6174" for this suite. 
• [SLOW TEST:17.743 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":19,"skipped":244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:56.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-34c5a5e7-f78e-40b0-8e36-c308e48fdb74 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:01:56.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7384" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":20,"skipped":286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:54.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 22 22:01:54.106: INFO: Waiting up to 5m0s for pod "pod-5e0a8ac3-6337-43a2-8843-4be11fb47d6c" in namespace "emptydir-3107" to be "Succeeded or Failed" Apr 22 22:01:54.108: INFO: Pod "pod-5e0a8ac3-6337-43a2-8843-4be11fb47d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022638ms Apr 22 22:01:56.113: INFO: Pod "pod-5e0a8ac3-6337-43a2-8843-4be11fb47d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007036709s Apr 22 22:01:58.116: INFO: Pod "pod-5e0a8ac3-6337-43a2-8843-4be11fb47d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010237481s Apr 22 22:02:00.119: INFO: Pod "pod-5e0a8ac3-6337-43a2-8843-4be11fb47d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012906562s Apr 22 22:02:02.122: INFO: Pod "pod-5e0a8ac3-6337-43a2-8843-4be11fb47d6c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.015721991s STEP: Saw pod success Apr 22 22:02:02.122: INFO: Pod "pod-5e0a8ac3-6337-43a2-8843-4be11fb47d6c" satisfied condition "Succeeded or Failed" Apr 22 22:02:02.124: INFO: Trying to get logs from node node2 pod pod-5e0a8ac3-6337-43a2-8843-4be11fb47d6c container test-container: STEP: delete the pod Apr 22 22:02:02.153: INFO: Waiting for pod pod-5e0a8ac3-6337-43a2-8843-4be11fb47d6c to disappear Apr 22 22:02:02.155: INFO: Pod pod-5e0a8ac3-6337-43a2-8843-4be11fb47d6c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:02.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3107" for this suite. • [SLOW TEST:8.089 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:55.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Apr 22 22:01:55.174: INFO: The status of Pod labelsupdatef7cf77b5-3bc3-4e43-9488-9adedce326ec is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:01:57.177: INFO: The status of Pod labelsupdatef7cf77b5-3bc3-4e43-9488-9adedce326ec is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:01:59.179: INFO: The status of Pod labelsupdatef7cf77b5-3bc3-4e43-9488-9adedce326ec is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:02:01.177: INFO: The status of Pod labelsupdatef7cf77b5-3bc3-4e43-9488-9adedce326ec is Running (Ready = true) Apr 22 22:02:01.694: INFO: Successfully updated pod "labelsupdatef7cf77b5-3bc3-4e43-9488-9adedce326ec" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:03.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1230" for this suite. 
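------------------------------
The Downward API spec above updates labels on a running pod and waits for the change to be re-projected into the mounted file. A sketch of the two moving parts, a downwardAPI volume exposing metadata.labels plus the label patch the test then applies; pod name, label values, and mount path are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "labels-demo", Labels: map[string]string{"stage": "one"}},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:         []string{"pause"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						// kubelet rewrites /etc/podinfo/labels whenever the
						// pod's labels change.
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The modification step: patch a label, then watch the file catch up.
	patch := []byte(`{"metadata":{"labels":{"stage":"two"}}}`)
	if _, err := client.CoreV1().Pods("default").Patch(ctx, "labels-demo",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
------------------------------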
• [SLOW TEST:8.757 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":338,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:03.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics Apr 22 22:02:04.997: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Apr 22 22:02:05.058: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:05.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6668" for this suite. 
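------------------------------
The garbage-collector spec above deletes a Deployment and then waits for its ReplicaSet and pods to be collected through their ownerReferences; which dependents survive is governed by the delete propagation policy. A one-call sketch (deployment name illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Background: the Deployment is removed at once and the garbage collector
	// then deletes the dependent ReplicaSet and pods via ownerReferences.
	// DeletePropagationOrphan would instead leave them behind, owner-less.
	policy := metav1.DeletePropagationBackground
	if err := client.AppsV1().Deployments("default").Delete(context.TODO(),
		"demo-deployment", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}
------------------------------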
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":17,"skipped":349,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:05.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Apr 22 22:02:05.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1002 cluster-info' Apr 22 22:02:05.309: INFO: stderr: "" Apr 22 22:02:05.309: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:05.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1002" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":18,"skipped":367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:02.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-07f9aade-65fa-4571-b27f-108aa2fdc6d9 STEP: Creating a pod to test consume configMaps Apr 22 22:02:02.388: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbf6b3b2-9b20-4d2d-860b-cdf74264fd09" in namespace "configmap-337" to be "Succeeded or Failed" Apr 22 22:02:02.390: INFO: Pod "pod-configmaps-dbf6b3b2-9b20-4d2d-860b-cdf74264fd09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237756ms Apr 22 22:02:04.394: INFO: Pod "pod-configmaps-dbf6b3b2-9b20-4d2d-860b-cdf74264fd09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00559369s Apr 22 22:02:06.398: INFO: Pod "pod-configmaps-dbf6b3b2-9b20-4d2d-860b-cdf74264fd09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009395832s Apr 22 22:02:08.402: INFO: Pod "pod-configmaps-dbf6b3b2-9b20-4d2d-860b-cdf74264fd09": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.013808509s Apr 22 22:02:10.407: INFO: Pod "pod-configmaps-dbf6b3b2-9b20-4d2d-860b-cdf74264fd09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01842582s STEP: Saw pod success Apr 22 22:02:10.407: INFO: Pod "pod-configmaps-dbf6b3b2-9b20-4d2d-860b-cdf74264fd09" satisfied condition "Succeeded or Failed" Apr 22 22:02:10.409: INFO: Trying to get logs from node node2 pod pod-configmaps-dbf6b3b2-9b20-4d2d-860b-cdf74264fd09 container agnhost-container: STEP: delete the pod Apr 22 22:02:10.424: INFO: Waiting for pod pod-configmaps-dbf6b3b2-9b20-4d2d-860b-cdf74264fd09 to disappear Apr 22 22:02:10.427: INFO: Pod pod-configmaps-dbf6b3b2-9b20-4d2d-860b-cdf74264fd09 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:10.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-337" for this suite. • [SLOW TEST:8.081 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:10.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:10.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1689" for this suite. 
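------------------------------
The ServiceAccount lifecycle above is create, watch, patch, list by label selector, delete. The same sequence through client-go, with the watch step omitted for brevity; the name and label are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	sas := client.CoreV1().ServiceAccounts("default")
	ctx := context.TODO()

	// Create.
	if _, err := sas.Create(ctx, &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-sa"},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Patch a label onto it.
	patch := []byte(`{"metadata":{"labels":{"purpose":"demo"}}}`)
	if _, err := sas.Patch(ctx, "demo-sa", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// Find it again via that label.
	list, err := sas.List(ctx, metav1.ListOptions{LabelSelector: "purpose=demo"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d ServiceAccount(s)\n", len(list.Items))
	// Delete.
	if err := sas.Delete(ctx, "demo-sa", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------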
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":16,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:56.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:01:56.341: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-791 I0422 22:01:56.361478 31 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-791, replica count: 1 I0422 22:01:57.413257 31 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:01:58.414926 31 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:01:59.417704 31 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:02:00.418153 31 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:02:01.419915 31 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:02:02.420549 31 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 22:02:02.527: INFO: Created: latency-svc-lrhm8 Apr 22 22:02:02.532: INFO: Got endpoints: latency-svc-lrhm8 [10.864311ms] Apr 22 22:02:02.538: INFO: Created: latency-svc-jn4hx Apr 22 22:02:02.541: INFO: Got endpoints: latency-svc-jn4hx [9.68532ms] Apr 22 22:02:02.543: INFO: Created: latency-svc-jg9fv Apr 22 22:02:02.544: INFO: Created: latency-svc-cjmqt Apr 22 22:02:02.545: INFO: Got endpoints: latency-svc-jg9fv [13.040511ms] Apr 22 22:02:02.549: INFO: Got endpoints: latency-svc-cjmqt [16.833371ms] Apr 22 22:02:02.552: INFO: Created: latency-svc-v2fzq Apr 22 22:02:02.554: INFO: Got endpoints: latency-svc-v2fzq [21.894266ms] Apr 22 22:02:02.556: INFO: Created: latency-svc-5gv2g Apr 22 22:02:02.558: INFO: Created: latency-svc-brqg9 Apr 22 22:02:02.558: INFO: Got endpoints: latency-svc-5gv2g [26.462932ms] Apr 22 22:02:02.561: INFO: Created: latency-svc-2trbs Apr 22 22:02:02.561: INFO: Got endpoints: latency-svc-brqg9 [28.620814ms] Apr 22 22:02:02.563: INFO: Got endpoints: latency-svc-2trbs [31.264529ms] Apr 22 22:02:02.564: INFO: Created: latency-svc-6m89l Apr 22 22:02:02.566: INFO: Created: latency-svc-b45lw Apr 22 22:02:02.567: INFO: Got endpoints: latency-svc-6m89l [34.425051ms] Apr 22 22:02:02.568: INFO: Got endpoints: latency-svc-b45lw [35.480668ms] Apr 22 22:02:02.569: INFO: Created: latency-svc-mhcmg Apr 22 22:02:02.572: INFO: 
Got endpoints: latency-svc-mhcmg [39.60552ms] Apr 22 22:02:02.573: INFO: Created: latency-svc-wrz2x Apr 22 22:02:02.575: INFO: Got endpoints: latency-svc-wrz2x [8.337374ms] Apr 22 22:02:02.575: INFO: Created: latency-svc-chsg9 Apr 22 22:02:02.578: INFO: Got endpoints: latency-svc-chsg9 [45.614351ms] Apr 22 22:02:02.578: INFO: Created: latency-svc-hv7mk Apr 22 22:02:02.581: INFO: Got endpoints: latency-svc-hv7mk [48.675418ms] Apr 22 22:02:02.581: INFO: Created: latency-svc-xxm8p Apr 22 22:02:02.583: INFO: Got endpoints: latency-svc-xxm8p [51.12178ms] Apr 22 22:02:02.583: INFO: Created: latency-svc-xqwp9 Apr 22 22:02:02.586: INFO: Got endpoints: latency-svc-xqwp9 [53.63817ms] Apr 22 22:02:02.586: INFO: Created: latency-svc-k2hvf Apr 22 22:02:02.589: INFO: Got endpoints: latency-svc-k2hvf [56.315629ms] Apr 22 22:02:02.589: INFO: Created: latency-svc-7gx2r Apr 22 22:02:02.591: INFO: Created: latency-svc-f5g9t Apr 22 22:02:02.591: INFO: Got endpoints: latency-svc-7gx2r [49.539543ms] Apr 22 22:02:02.593: INFO: Got endpoints: latency-svc-f5g9t [48.263194ms] Apr 22 22:02:02.594: INFO: Created: latency-svc-dkl7v Apr 22 22:02:02.597: INFO: Got endpoints: latency-svc-dkl7v [48.017643ms] Apr 22 22:02:02.597: INFO: Created: latency-svc-88hjv Apr 22 22:02:02.599: INFO: Created: latency-svc-kntv7 Apr 22 22:02:02.599: INFO: Got endpoints: latency-svc-88hjv [45.661175ms] Apr 22 22:02:02.601: INFO: Got endpoints: latency-svc-kntv7 [42.747007ms] Apr 22 22:02:02.602: INFO: Created: latency-svc-nxtl4 Apr 22 22:02:02.604: INFO: Got endpoints: latency-svc-nxtl4 [42.971888ms] Apr 22 22:02:02.605: INFO: Created: latency-svc-tg62t Apr 22 22:02:02.607: INFO: Got endpoints: latency-svc-tg62t [44.016795ms] Apr 22 22:02:02.607: INFO: Created: latency-svc-46d7t Apr 22 22:02:02.609: INFO: Got endpoints: latency-svc-46d7t [41.641862ms] Apr 22 22:02:02.610: INFO: Created: latency-svc-4xmsr Apr 22 22:02:02.613: INFO: Got endpoints: latency-svc-4xmsr [40.82189ms] Apr 22 22:02:02.614: INFO: Created: latency-svc-nckg6 Apr 22 22:02:02.616: INFO: Got endpoints: latency-svc-nckg6 [40.662468ms] Apr 22 22:02:02.616: INFO: Created: latency-svc-x2z2f Apr 22 22:02:02.619: INFO: Got endpoints: latency-svc-x2z2f [40.719339ms] Apr 22 22:02:02.620: INFO: Created: latency-svc-l755m Apr 22 22:02:02.622: INFO: Got endpoints: latency-svc-l755m [41.235593ms] Apr 22 22:02:02.623: INFO: Created: latency-svc-vkx7q Apr 22 22:02:02.625: INFO: Got endpoints: latency-svc-vkx7q [41.744159ms] Apr 22 22:02:02.625: INFO: Created: latency-svc-9c6lf Apr 22 22:02:02.628: INFO: Got endpoints: latency-svc-9c6lf [41.833973ms] Apr 22 22:02:02.628: INFO: Created: latency-svc-sqqc4 Apr 22 22:02:02.631: INFO: Got endpoints: latency-svc-sqqc4 [41.837788ms] Apr 22 22:02:02.632: INFO: Created: latency-svc-g6hjh Apr 22 22:02:02.633: INFO: Created: latency-svc-4qxrk Apr 22 22:02:02.636: INFO: Created: latency-svc-v7fz5 Apr 22 22:02:02.639: INFO: Created: latency-svc-rhrh9 Apr 22 22:02:02.641: INFO: Created: latency-svc-9qlx6 Apr 22 22:02:02.644: INFO: Created: latency-svc-j9hmx Apr 22 22:02:02.647: INFO: Created: latency-svc-b5jj9 Apr 22 22:02:02.649: INFO: Created: latency-svc-jq4qb Apr 22 22:02:02.652: INFO: Created: latency-svc-zjqfm Apr 22 22:02:02.655: INFO: Created: latency-svc-b5mp8 Apr 22 22:02:02.656: INFO: Created: latency-svc-rwpjt Apr 22 22:02:02.660: INFO: Created: latency-svc-fh22x Apr 22 22:02:02.664: INFO: Created: latency-svc-wc4k8 Apr 22 22:02:02.665: INFO: Created: latency-svc-4v8l9 Apr 22 22:02:02.668: INFO: Created: latency-svc-rwvxb Apr 22 
22:02:02.680: INFO: Got endpoints: latency-svc-g6hjh [89.019998ms] Apr 22 22:02:02.686: INFO: Created: latency-svc-l7hjb Apr 22 22:02:02.731: INFO: Got endpoints: latency-svc-4qxrk [137.665048ms] Apr 22 22:02:02.736: INFO: Created: latency-svc-9nssl Apr 22 22:02:02.782: INFO: Got endpoints: latency-svc-v7fz5 [184.949339ms] Apr 22 22:02:02.787: INFO: Created: latency-svc-qvf9g Apr 22 22:02:02.831: INFO: Got endpoints: latency-svc-rhrh9 [231.126475ms] Apr 22 22:02:02.836: INFO: Created: latency-svc-dg9rt Apr 22 22:02:02.881: INFO: Got endpoints: latency-svc-9qlx6 [279.521265ms] Apr 22 22:02:02.886: INFO: Created: latency-svc-grm8g Apr 22 22:02:02.932: INFO: Got endpoints: latency-svc-j9hmx [327.776827ms] Apr 22 22:02:02.938: INFO: Created: latency-svc-bmzb9 Apr 22 22:02:02.981: INFO: Got endpoints: latency-svc-b5jj9 [373.340045ms] Apr 22 22:02:02.986: INFO: Created: latency-svc-2bthw Apr 22 22:02:03.031: INFO: Got endpoints: latency-svc-jq4qb [421.350318ms] Apr 22 22:02:03.039: INFO: Created: latency-svc-vss75 Apr 22 22:02:03.082: INFO: Got endpoints: latency-svc-zjqfm [469.171008ms] Apr 22 22:02:03.088: INFO: Created: latency-svc-7mrr9 Apr 22 22:02:03.131: INFO: Got endpoints: latency-svc-b5mp8 [514.868227ms] Apr 22 22:02:03.137: INFO: Created: latency-svc-298xg Apr 22 22:02:03.181: INFO: Got endpoints: latency-svc-rwpjt [562.617692ms] Apr 22 22:02:03.186: INFO: Created: latency-svc-4nmk5 Apr 22 22:02:03.232: INFO: Got endpoints: latency-svc-fh22x [609.840769ms] Apr 22 22:02:03.237: INFO: Created: latency-svc-5jnbf Apr 22 22:02:03.281: INFO: Got endpoints: latency-svc-wc4k8 [656.107072ms] Apr 22 22:02:03.287: INFO: Created: latency-svc-24j82 Apr 22 22:02:03.331: INFO: Got endpoints: latency-svc-4v8l9 [702.94295ms] Apr 22 22:02:03.335: INFO: Created: latency-svc-rpl5g Apr 22 22:02:03.381: INFO: Got endpoints: latency-svc-rwvxb [750.394626ms] Apr 22 22:02:03.388: INFO: Created: latency-svc-tdpmn Apr 22 22:02:03.431: INFO: Got endpoints: latency-svc-l7hjb [750.583244ms] Apr 22 22:02:03.437: INFO: Created: latency-svc-xz8tj Apr 22 22:02:03.480: INFO: Got endpoints: latency-svc-9nssl [748.660951ms] Apr 22 22:02:03.486: INFO: Created: latency-svc-2kb7h Apr 22 22:02:03.532: INFO: Got endpoints: latency-svc-qvf9g [750.509186ms] Apr 22 22:02:03.538: INFO: Created: latency-svc-mknl2 Apr 22 22:02:03.581: INFO: Got endpoints: latency-svc-dg9rt [750.266463ms] Apr 22 22:02:03.588: INFO: Created: latency-svc-lfwl8 Apr 22 22:02:03.630: INFO: Got endpoints: latency-svc-grm8g [749.597204ms] Apr 22 22:02:03.636: INFO: Created: latency-svc-qpfps Apr 22 22:02:03.681: INFO: Got endpoints: latency-svc-bmzb9 [749.510542ms] Apr 22 22:02:03.687: INFO: Created: latency-svc-5lw9x Apr 22 22:02:03.731: INFO: Got endpoints: latency-svc-2bthw [750.748288ms] Apr 22 22:02:03.737: INFO: Created: latency-svc-z48pv Apr 22 22:02:03.781: INFO: Got endpoints: latency-svc-vss75 [750.442277ms] Apr 22 22:02:03.786: INFO: Created: latency-svc-8745z Apr 22 22:02:03.831: INFO: Got endpoints: latency-svc-7mrr9 [749.341122ms] Apr 22 22:02:03.836: INFO: Created: latency-svc-bmxc9 Apr 22 22:02:03.881: INFO: Got endpoints: latency-svc-298xg [750.152145ms] Apr 22 22:02:03.887: INFO: Created: latency-svc-zlgpv Apr 22 22:02:03.931: INFO: Got endpoints: latency-svc-4nmk5 [749.93255ms] Apr 22 22:02:03.937: INFO: Created: latency-svc-jmxn8 Apr 22 22:02:03.981: INFO: Got endpoints: latency-svc-5jnbf [749.145142ms] Apr 22 22:02:03.988: INFO: Created: latency-svc-bk8db Apr 22 22:02:04.031: INFO: Got endpoints: latency-svc-24j82 [749.77642ms] Apr 22 
22:02:04.037: INFO: Created: latency-svc-l4kxz Apr 22 22:02:04.080: INFO: Got endpoints: latency-svc-rpl5g [749.362832ms] Apr 22 22:02:04.085: INFO: Created: latency-svc-rjpjw Apr 22 22:02:04.131: INFO: Got endpoints: latency-svc-tdpmn [750.030167ms] Apr 22 22:02:04.136: INFO: Created: latency-svc-tg59f Apr 22 22:02:04.181: INFO: Got endpoints: latency-svc-xz8tj [749.686936ms] Apr 22 22:02:04.186: INFO: Created: latency-svc-vpfw4 Apr 22 22:02:04.231: INFO: Got endpoints: latency-svc-2kb7h [751.205077ms] Apr 22 22:02:04.236: INFO: Created: latency-svc-jftfv Apr 22 22:02:04.283: INFO: Got endpoints: latency-svc-mknl2 [750.99437ms] Apr 22 22:02:04.297: INFO: Created: latency-svc-n8cps Apr 22 22:02:04.331: INFO: Got endpoints: latency-svc-lfwl8 [750.315642ms] Apr 22 22:02:04.338: INFO: Created: latency-svc-xq25k Apr 22 22:02:04.382: INFO: Got endpoints: latency-svc-qpfps [751.242755ms] Apr 22 22:02:04.388: INFO: Created: latency-svc-sll4s Apr 22 22:02:04.431: INFO: Got endpoints: latency-svc-5lw9x [750.226652ms] Apr 22 22:02:04.437: INFO: Created: latency-svc-gdkq4 Apr 22 22:02:04.481: INFO: Got endpoints: latency-svc-z48pv [749.59929ms] Apr 22 22:02:04.487: INFO: Created: latency-svc-fjd9n Apr 22 22:02:04.531: INFO: Got endpoints: latency-svc-8745z [749.832405ms] Apr 22 22:02:04.537: INFO: Created: latency-svc-kxx9j Apr 22 22:02:04.581: INFO: Got endpoints: latency-svc-bmxc9 [749.869974ms] Apr 22 22:02:04.586: INFO: Created: latency-svc-nbv5h Apr 22 22:02:04.632: INFO: Got endpoints: latency-svc-zlgpv [750.800923ms] Apr 22 22:02:04.637: INFO: Created: latency-svc-q5dc8 Apr 22 22:02:04.681: INFO: Got endpoints: latency-svc-jmxn8 [749.66572ms] Apr 22 22:02:04.687: INFO: Created: latency-svc-frkjb Apr 22 22:02:04.731: INFO: Got endpoints: latency-svc-bk8db [749.730305ms] Apr 22 22:02:04.737: INFO: Created: latency-svc-sjnvm Apr 22 22:02:04.780: INFO: Got endpoints: latency-svc-l4kxz [749.43109ms] Apr 22 22:02:04.788: INFO: Created: latency-svc-wbwz6 Apr 22 22:02:04.831: INFO: Got endpoints: latency-svc-rjpjw [751.237694ms] Apr 22 22:02:04.837: INFO: Created: latency-svc-95mf7 Apr 22 22:02:04.881: INFO: Got endpoints: latency-svc-tg59f [749.446288ms] Apr 22 22:02:04.885: INFO: Created: latency-svc-zvh75 Apr 22 22:02:04.931: INFO: Got endpoints: latency-svc-vpfw4 [750.162231ms] Apr 22 22:02:04.936: INFO: Created: latency-svc-dbh99 Apr 22 22:02:04.981: INFO: Got endpoints: latency-svc-jftfv [749.715522ms] Apr 22 22:02:04.986: INFO: Created: latency-svc-rqmdb Apr 22 22:02:05.031: INFO: Got endpoints: latency-svc-n8cps [747.622153ms] Apr 22 22:02:05.037: INFO: Created: latency-svc-dm2nt Apr 22 22:02:05.081: INFO: Got endpoints: latency-svc-xq25k [749.588446ms] Apr 22 22:02:05.087: INFO: Created: latency-svc-gbvnx Apr 22 22:02:05.130: INFO: Got endpoints: latency-svc-sll4s [748.50149ms] Apr 22 22:02:05.135: INFO: Created: latency-svc-wk8ml Apr 22 22:02:05.181: INFO: Got endpoints: latency-svc-gdkq4 [749.337036ms] Apr 22 22:02:05.188: INFO: Created: latency-svc-xfz8p Apr 22 22:02:05.231: INFO: Got endpoints: latency-svc-fjd9n [749.785196ms] Apr 22 22:02:05.238: INFO: Created: latency-svc-xx555 Apr 22 22:02:05.281: INFO: Got endpoints: latency-svc-kxx9j [749.285804ms] Apr 22 22:02:05.286: INFO: Created: latency-svc-7svhh Apr 22 22:02:05.332: INFO: Got endpoints: latency-svc-nbv5h [750.804894ms] Apr 22 22:02:05.348: INFO: Created: latency-svc-49pzh Apr 22 22:02:05.380: INFO: Got endpoints: latency-svc-q5dc8 [748.760038ms] Apr 22 22:02:05.389: INFO: Created: latency-svc-zccd5 Apr 22 22:02:05.430: INFO: 
Got endpoints: latency-svc-frkjb [749.267829ms] Apr 22 22:02:05.435: INFO: Created: latency-svc-cxwkq Apr 22 22:02:05.480: INFO: Got endpoints: latency-svc-sjnvm [749.300556ms] Apr 22 22:02:05.487: INFO: Created: latency-svc-92qcd Apr 22 22:02:05.533: INFO: Got endpoints: latency-svc-wbwz6 [752.537114ms] Apr 22 22:02:05.539: INFO: Created: latency-svc-kzj6d Apr 22 22:02:05.580: INFO: Got endpoints: latency-svc-95mf7 [748.793804ms] Apr 22 22:02:05.585: INFO: Created: latency-svc-mxjf2 Apr 22 22:02:05.632: INFO: Got endpoints: latency-svc-zvh75 [751.090154ms] Apr 22 22:02:05.637: INFO: Created: latency-svc-768xp Apr 22 22:02:05.697: INFO: Got endpoints: latency-svc-dbh99 [766.013804ms] Apr 22 22:02:05.702: INFO: Created: latency-svc-pd7f4 Apr 22 22:02:05.730: INFO: Got endpoints: latency-svc-rqmdb [749.688471ms] Apr 22 22:02:05.736: INFO: Created: latency-svc-ksqhr Apr 22 22:02:05.781: INFO: Got endpoints: latency-svc-dm2nt [749.693292ms] Apr 22 22:02:05.786: INFO: Created: latency-svc-p8k8k Apr 22 22:02:05.832: INFO: Got endpoints: latency-svc-gbvnx [750.566452ms] Apr 22 22:02:05.838: INFO: Created: latency-svc-b7x7r Apr 22 22:02:05.881: INFO: Got endpoints: latency-svc-wk8ml [750.254738ms] Apr 22 22:02:05.886: INFO: Created: latency-svc-z2ljd Apr 22 22:02:05.930: INFO: Got endpoints: latency-svc-xfz8p [749.14749ms] Apr 22 22:02:05.936: INFO: Created: latency-svc-k2bb7 Apr 22 22:02:05.981: INFO: Got endpoints: latency-svc-xx555 [749.784653ms] Apr 22 22:02:05.987: INFO: Created: latency-svc-fkbfz Apr 22 22:02:06.030: INFO: Got endpoints: latency-svc-7svhh [748.944764ms] Apr 22 22:02:06.034: INFO: Created: latency-svc-mlnx6 Apr 22 22:02:06.081: INFO: Got endpoints: latency-svc-49pzh [748.73569ms] Apr 22 22:02:06.086: INFO: Created: latency-svc-bqqth Apr 22 22:02:06.131: INFO: Got endpoints: latency-svc-zccd5 [750.697258ms] Apr 22 22:02:06.137: INFO: Created: latency-svc-jbzbj Apr 22 22:02:06.181: INFO: Got endpoints: latency-svc-cxwkq [750.827027ms] Apr 22 22:02:06.186: INFO: Created: latency-svc-j7g5d Apr 22 22:02:06.280: INFO: Got endpoints: latency-svc-92qcd [799.784299ms] Apr 22 22:02:06.285: INFO: Created: latency-svc-9rxjf Apr 22 22:02:06.331: INFO: Got endpoints: latency-svc-kzj6d [798.316839ms] Apr 22 22:02:06.338: INFO: Created: latency-svc-969kh Apr 22 22:02:06.381: INFO: Got endpoints: latency-svc-mxjf2 [801.22148ms] Apr 22 22:02:06.386: INFO: Created: latency-svc-8wjlx Apr 22 22:02:06.430: INFO: Got endpoints: latency-svc-768xp [798.713519ms] Apr 22 22:02:06.437: INFO: Created: latency-svc-5xjvt Apr 22 22:02:06.482: INFO: Got endpoints: latency-svc-pd7f4 [784.731455ms] Apr 22 22:02:06.488: INFO: Created: latency-svc-fzzs5 Apr 22 22:02:06.531: INFO: Got endpoints: latency-svc-ksqhr [800.378032ms] Apr 22 22:02:06.536: INFO: Created: latency-svc-zcktc Apr 22 22:02:06.580: INFO: Got endpoints: latency-svc-p8k8k [799.48306ms] Apr 22 22:02:06.587: INFO: Created: latency-svc-nl6wf Apr 22 22:02:06.631: INFO: Got endpoints: latency-svc-b7x7r [799.754618ms] Apr 22 22:02:06.638: INFO: Created: latency-svc-mtq64 Apr 22 22:02:06.681: INFO: Got endpoints: latency-svc-z2ljd [799.939091ms] Apr 22 22:02:06.685: INFO: Created: latency-svc-p7lwj Apr 22 22:02:06.731: INFO: Got endpoints: latency-svc-k2bb7 [800.769425ms] Apr 22 22:02:06.736: INFO: Created: latency-svc-vjxlf Apr 22 22:02:06.780: INFO: Got endpoints: latency-svc-fkbfz [799.476183ms] Apr 22 22:02:06.786: INFO: Created: latency-svc-tv4w2 Apr 22 22:02:06.831: INFO: Got endpoints: latency-svc-mlnx6 [801.502611ms] Apr 22 22:02:06.836: INFO: 
Created: latency-svc-94mdg Apr 22 22:02:06.881: INFO: Got endpoints: latency-svc-bqqth [800.604651ms] Apr 22 22:02:06.886: INFO: Created: latency-svc-p4d67 Apr 22 22:02:06.932: INFO: Got endpoints: latency-svc-jbzbj [800.627461ms] Apr 22 22:02:06.938: INFO: Created: latency-svc-l9sr5 Apr 22 22:02:06.980: INFO: Got endpoints: latency-svc-j7g5d [799.147336ms] Apr 22 22:02:06.985: INFO: Created: latency-svc-7wmmq Apr 22 22:02:07.031: INFO: Got endpoints: latency-svc-9rxjf [751.127509ms] Apr 22 22:02:07.036: INFO: Created: latency-svc-t8776 Apr 22 22:02:07.083: INFO: Got endpoints: latency-svc-969kh [751.344917ms] Apr 22 22:02:07.089: INFO: Created: latency-svc-8s88w Apr 22 22:02:07.131: INFO: Got endpoints: latency-svc-8wjlx [749.4071ms] Apr 22 22:02:07.137: INFO: Created: latency-svc-fzt77 Apr 22 22:02:07.181: INFO: Got endpoints: latency-svc-5xjvt [750.65036ms] Apr 22 22:02:07.187: INFO: Created: latency-svc-p8nmd Apr 22 22:02:07.230: INFO: Got endpoints: latency-svc-fzzs5 [748.541492ms] Apr 22 22:02:07.236: INFO: Created: latency-svc-582w9 Apr 22 22:02:07.281: INFO: Got endpoints: latency-svc-zcktc [749.977502ms] Apr 22 22:02:07.286: INFO: Created: latency-svc-6td45 Apr 22 22:02:07.330: INFO: Got endpoints: latency-svc-nl6wf [749.756893ms] Apr 22 22:02:07.335: INFO: Created: latency-svc-q7t4q Apr 22 22:02:07.381: INFO: Got endpoints: latency-svc-mtq64 [749.518091ms] Apr 22 22:02:07.388: INFO: Created: latency-svc-dpvbg Apr 22 22:02:07.431: INFO: Got endpoints: latency-svc-p7lwj [750.300308ms] Apr 22 22:02:07.436: INFO: Created: latency-svc-smsc8 Apr 22 22:02:07.480: INFO: Got endpoints: latency-svc-vjxlf [749.495874ms] Apr 22 22:02:07.486: INFO: Created: latency-svc-m64bp Apr 22 22:02:07.532: INFO: Got endpoints: latency-svc-tv4w2 [751.583987ms] Apr 22 22:02:07.538: INFO: Created: latency-svc-gsnl8 Apr 22 22:02:07.581: INFO: Got endpoints: latency-svc-94mdg [749.480739ms] Apr 22 22:02:07.586: INFO: Created: latency-svc-dc6ms Apr 22 22:02:07.631: INFO: Got endpoints: latency-svc-p4d67 [749.944457ms] Apr 22 22:02:07.637: INFO: Created: latency-svc-77n95 Apr 22 22:02:07.681: INFO: Got endpoints: latency-svc-l9sr5 [749.435163ms] Apr 22 22:02:07.687: INFO: Created: latency-svc-j544c Apr 22 22:02:07.731: INFO: Got endpoints: latency-svc-7wmmq [750.972167ms] Apr 22 22:02:07.737: INFO: Created: latency-svc-2s29v Apr 22 22:02:07.781: INFO: Got endpoints: latency-svc-t8776 [749.377169ms] Apr 22 22:02:07.786: INFO: Created: latency-svc-h2cvg Apr 22 22:02:07.831: INFO: Got endpoints: latency-svc-8s88w [748.335941ms] Apr 22 22:02:07.838: INFO: Created: latency-svc-vf5nk Apr 22 22:02:07.880: INFO: Got endpoints: latency-svc-fzt77 [749.516148ms] Apr 22 22:02:07.885: INFO: Created: latency-svc-j964w Apr 22 22:02:07.932: INFO: Got endpoints: latency-svc-p8nmd [750.401767ms] Apr 22 22:02:07.937: INFO: Created: latency-svc-sgzfv Apr 22 22:02:07.981: INFO: Got endpoints: latency-svc-582w9 [750.730469ms] Apr 22 22:02:07.987: INFO: Created: latency-svc-z4xc4 Apr 22 22:02:08.030: INFO: Got endpoints: latency-svc-6td45 [749.714142ms] Apr 22 22:02:08.035: INFO: Created: latency-svc-xtj2h Apr 22 22:02:08.081: INFO: Got endpoints: latency-svc-q7t4q [750.703935ms] Apr 22 22:02:08.087: INFO: Created: latency-svc-fn8qs Apr 22 22:02:08.132: INFO: Got endpoints: latency-svc-dpvbg [750.616739ms] Apr 22 22:02:08.137: INFO: Created: latency-svc-5q4sr Apr 22 22:02:08.180: INFO: Got endpoints: latency-svc-smsc8 [748.882519ms] Apr 22 22:02:08.186: INFO: Created: latency-svc-bclj7 Apr 22 22:02:08.231: INFO: Got endpoints: 
latency-svc-m64bp [751.012311ms] Apr 22 22:02:08.237: INFO: Created: latency-svc-8cvrp Apr 22 22:02:08.280: INFO: Got endpoints: latency-svc-gsnl8 [748.342193ms] Apr 22 22:02:08.286: INFO: Created: latency-svc-fnpfd Apr 22 22:02:08.330: INFO: Got endpoints: latency-svc-dc6ms [749.795237ms] Apr 22 22:02:08.336: INFO: Created: latency-svc-55sfx Apr 22 22:02:08.381: INFO: Got endpoints: latency-svc-77n95 [750.09561ms] Apr 22 22:02:08.389: INFO: Created: latency-svc-tfqjj Apr 22 22:02:08.431: INFO: Got endpoints: latency-svc-j544c [749.70669ms] Apr 22 22:02:08.437: INFO: Created: latency-svc-kzqt9 Apr 22 22:02:08.480: INFO: Got endpoints: latency-svc-2s29v [748.925688ms] Apr 22 22:02:08.486: INFO: Created: latency-svc-vm4s2 Apr 22 22:02:08.531: INFO: Got endpoints: latency-svc-h2cvg [750.080521ms] Apr 22 22:02:08.537: INFO: Created: latency-svc-ppgkk Apr 22 22:02:08.581: INFO: Got endpoints: latency-svc-vf5nk [750.092168ms] Apr 22 22:02:08.589: INFO: Created: latency-svc-hrnhx Apr 22 22:02:08.631: INFO: Got endpoints: latency-svc-j964w [750.73848ms] Apr 22 22:02:08.636: INFO: Created: latency-svc-fmks7 Apr 22 22:02:08.681: INFO: Got endpoints: latency-svc-sgzfv [749.427067ms] Apr 22 22:02:08.687: INFO: Created: latency-svc-2lbl6 Apr 22 22:02:08.731: INFO: Got endpoints: latency-svc-z4xc4 [750.324564ms] Apr 22 22:02:08.738: INFO: Created: latency-svc-5t65k Apr 22 22:02:08.781: INFO: Got endpoints: latency-svc-xtj2h [750.121156ms] Apr 22 22:02:08.786: INFO: Created: latency-svc-cffrc Apr 22 22:02:08.831: INFO: Got endpoints: latency-svc-fn8qs [749.736492ms] Apr 22 22:02:08.837: INFO: Created: latency-svc-7px46 Apr 22 22:02:08.881: INFO: Got endpoints: latency-svc-5q4sr [749.418745ms] Apr 22 22:02:08.886: INFO: Created: latency-svc-bswl6 Apr 22 22:02:08.931: INFO: Got endpoints: latency-svc-bclj7 [750.69802ms] Apr 22 22:02:08.936: INFO: Created: latency-svc-f9sq9 Apr 22 22:02:08.980: INFO: Got endpoints: latency-svc-8cvrp [748.452013ms] Apr 22 22:02:08.986: INFO: Created: latency-svc-vxpzl Apr 22 22:02:09.030: INFO: Got endpoints: latency-svc-fnpfd [749.844555ms] Apr 22 22:02:09.036: INFO: Created: latency-svc-zz9sb Apr 22 22:02:09.080: INFO: Got endpoints: latency-svc-55sfx [749.984248ms] Apr 22 22:02:09.086: INFO: Created: latency-svc-4h6nw Apr 22 22:02:09.130: INFO: Got endpoints: latency-svc-tfqjj [748.730697ms] Apr 22 22:02:09.136: INFO: Created: latency-svc-fcczn Apr 22 22:02:09.181: INFO: Got endpoints: latency-svc-kzqt9 [749.768158ms] Apr 22 22:02:09.187: INFO: Created: latency-svc-p95mz Apr 22 22:02:09.231: INFO: Got endpoints: latency-svc-vm4s2 [750.96679ms] Apr 22 22:02:09.237: INFO: Created: latency-svc-6ctd7 Apr 22 22:02:09.281: INFO: Got endpoints: latency-svc-ppgkk [749.939397ms] Apr 22 22:02:09.287: INFO: Created: latency-svc-m6mng Apr 22 22:02:09.331: INFO: Got endpoints: latency-svc-hrnhx [749.732917ms] Apr 22 22:02:09.337: INFO: Created: latency-svc-xzqnf Apr 22 22:02:09.381: INFO: Got endpoints: latency-svc-fmks7 [749.935718ms] Apr 22 22:02:09.386: INFO: Created: latency-svc-zr2hg Apr 22 22:02:09.431: INFO: Got endpoints: latency-svc-2lbl6 [750.216116ms] Apr 22 22:02:09.436: INFO: Created: latency-svc-kv29r Apr 22 22:02:09.481: INFO: Got endpoints: latency-svc-5t65k [749.521503ms] Apr 22 22:02:09.487: INFO: Created: latency-svc-pqc7v Apr 22 22:02:09.531: INFO: Got endpoints: latency-svc-cffrc [750.180255ms] Apr 22 22:02:09.536: INFO: Created: latency-svc-t7mzz Apr 22 22:02:09.581: INFO: Got endpoints: latency-svc-7px46 [750.034938ms] Apr 22 22:02:09.586: INFO: Created: 
latency-svc-8fk96 Apr 22 22:02:09.631: INFO: Got endpoints: latency-svc-bswl6 [749.838423ms] Apr 22 22:02:09.636: INFO: Created: latency-svc-r86z9 Apr 22 22:02:09.681: INFO: Got endpoints: latency-svc-f9sq9 [750.682294ms] Apr 22 22:02:09.688: INFO: Created: latency-svc-k4j2n Apr 22 22:02:09.731: INFO: Got endpoints: latency-svc-vxpzl [751.248286ms] Apr 22 22:02:09.737: INFO: Created: latency-svc-c5fk4 Apr 22 22:02:09.780: INFO: Got endpoints: latency-svc-zz9sb [750.18298ms] Apr 22 22:02:09.788: INFO: Created: latency-svc-6t8qv Apr 22 22:02:09.831: INFO: Got endpoints: latency-svc-4h6nw [750.226435ms] Apr 22 22:02:09.836: INFO: Created: latency-svc-qk4kg Apr 22 22:02:09.880: INFO: Got endpoints: latency-svc-fcczn [750.00764ms] Apr 22 22:02:09.886: INFO: Created: latency-svc-p5bf8 Apr 22 22:02:09.931: INFO: Got endpoints: latency-svc-p95mz [749.92107ms] Apr 22 22:02:09.938: INFO: Created: latency-svc-xbx9x Apr 22 22:02:09.981: INFO: Got endpoints: latency-svc-6ctd7 [749.225544ms] Apr 22 22:02:09.986: INFO: Created: latency-svc-nkjz2 Apr 22 22:02:10.032: INFO: Got endpoints: latency-svc-m6mng [750.853915ms] Apr 22 22:02:10.037: INFO: Created: latency-svc-m2wtc Apr 22 22:02:10.081: INFO: Got endpoints: latency-svc-xzqnf [750.045358ms] Apr 22 22:02:10.087: INFO: Created: latency-svc-92ldq Apr 22 22:02:10.131: INFO: Got endpoints: latency-svc-zr2hg [749.764888ms] Apr 22 22:02:10.137: INFO: Created: latency-svc-lfxfq Apr 22 22:02:10.180: INFO: Got endpoints: latency-svc-kv29r [748.733236ms] Apr 22 22:02:10.186: INFO: Created: latency-svc-8x6vb Apr 22 22:02:10.230: INFO: Got endpoints: latency-svc-pqc7v [749.052851ms] Apr 22 22:02:10.237: INFO: Created: latency-svc-d5csz Apr 22 22:02:10.281: INFO: Got endpoints: latency-svc-t7mzz [749.601906ms] Apr 22 22:02:10.286: INFO: Created: latency-svc-nbrk9 Apr 22 22:02:10.331: INFO: Got endpoints: latency-svc-8fk96 [749.720187ms] Apr 22 22:02:10.336: INFO: Created: latency-svc-k4zgr Apr 22 22:02:10.381: INFO: Got endpoints: latency-svc-r86z9 [750.057754ms] Apr 22 22:02:10.387: INFO: Created: latency-svc-pfrdt Apr 22 22:02:10.431: INFO: Got endpoints: latency-svc-k4j2n [749.652615ms] Apr 22 22:02:10.480: INFO: Got endpoints: latency-svc-c5fk4 [749.086436ms] Apr 22 22:02:10.531: INFO: Got endpoints: latency-svc-6t8qv [750.4499ms] Apr 22 22:02:10.580: INFO: Got endpoints: latency-svc-qk4kg [749.07121ms] Apr 22 22:02:10.631: INFO: Got endpoints: latency-svc-p5bf8 [750.237872ms] Apr 22 22:02:10.680: INFO: Got endpoints: latency-svc-xbx9x [749.295226ms] Apr 22 22:02:10.731: INFO: Got endpoints: latency-svc-nkjz2 [750.733271ms] Apr 22 22:02:10.780: INFO: Got endpoints: latency-svc-m2wtc [748.429203ms] Apr 22 22:02:10.832: INFO: Got endpoints: latency-svc-92ldq [750.250745ms] Apr 22 22:02:10.881: INFO: Got endpoints: latency-svc-lfxfq [750.29053ms] Apr 22 22:02:10.932: INFO: Got endpoints: latency-svc-8x6vb [751.852074ms] Apr 22 22:02:10.981: INFO: Got endpoints: latency-svc-d5csz [751.286689ms] Apr 22 22:02:11.031: INFO: Got endpoints: latency-svc-nbrk9 [750.718033ms] Apr 22 22:02:11.082: INFO: Got endpoints: latency-svc-k4zgr [751.24423ms] Apr 22 22:02:11.131: INFO: Got endpoints: latency-svc-pfrdt [750.143724ms] Apr 22 22:02:11.131: INFO: Latencies: [8.337374ms 9.68532ms 13.040511ms 16.833371ms 21.894266ms 26.462932ms 28.620814ms 31.264529ms 34.425051ms 35.480668ms 39.60552ms 40.662468ms 40.719339ms 40.82189ms 41.235593ms 41.641862ms 41.744159ms 41.833973ms 41.837788ms 42.747007ms 42.971888ms 44.016795ms 45.614351ms 45.661175ms 48.017643ms 48.263194ms 
48.675418ms 49.539543ms 51.12178ms 53.63817ms 56.315629ms 89.019998ms 137.665048ms 184.949339ms 231.126475ms 279.521265ms 327.776827ms 373.340045ms 421.350318ms 469.171008ms 514.868227ms 562.617692ms 609.840769ms 656.107072ms 702.94295ms 747.622153ms 748.335941ms 748.342193ms 748.429203ms 748.452013ms 748.50149ms 748.541492ms 748.660951ms 748.730697ms 748.733236ms 748.73569ms 748.760038ms 748.793804ms 748.882519ms 748.925688ms 748.944764ms 749.052851ms 749.07121ms 749.086436ms 749.145142ms 749.14749ms 749.225544ms 749.267829ms 749.285804ms 749.295226ms 749.300556ms 749.337036ms 749.341122ms 749.362832ms 749.377169ms 749.4071ms 749.418745ms 749.427067ms 749.43109ms 749.435163ms 749.446288ms 749.480739ms 749.495874ms 749.510542ms 749.516148ms 749.518091ms 749.521503ms 749.588446ms 749.597204ms 749.59929ms 749.601906ms 749.652615ms 749.66572ms 749.686936ms 749.688471ms 749.693292ms 749.70669ms 749.714142ms 749.715522ms 749.720187ms 749.730305ms 749.732917ms 749.736492ms 749.756893ms 749.764888ms 749.768158ms 749.77642ms 749.784653ms 749.785196ms 749.795237ms 749.832405ms 749.838423ms 749.844555ms 749.869974ms 749.92107ms 749.93255ms 749.935718ms 749.939397ms 749.944457ms 749.977502ms 749.984248ms 750.00764ms 750.030167ms 750.034938ms 750.045358ms 750.057754ms 750.080521ms 750.092168ms 750.09561ms 750.121156ms 750.143724ms 750.152145ms 750.162231ms 750.180255ms 750.18298ms 750.216116ms 750.226435ms 750.226652ms 750.237872ms 750.250745ms 750.254738ms 750.266463ms 750.29053ms 750.300308ms 750.315642ms 750.324564ms 750.394626ms 750.401767ms 750.442277ms 750.4499ms 750.509186ms 750.566452ms 750.583244ms 750.616739ms 750.65036ms 750.682294ms 750.697258ms 750.69802ms 750.703935ms 750.718033ms 750.730469ms 750.733271ms 750.73848ms 750.748288ms 750.800923ms 750.804894ms 750.827027ms 750.853915ms 750.96679ms 750.972167ms 750.99437ms 751.012311ms 751.090154ms 751.127509ms 751.205077ms 751.237694ms 751.242755ms 751.24423ms 751.248286ms 751.286689ms 751.344917ms 751.583987ms 751.852074ms 752.537114ms 766.013804ms 784.731455ms 798.316839ms 798.713519ms 799.147336ms 799.476183ms 799.48306ms 799.754618ms 799.784299ms 799.939091ms 800.378032ms 800.604651ms 800.627461ms 800.769425ms 801.22148ms 801.502611ms] Apr 22 22:02:11.132: INFO: 50 %ile: 749.730305ms Apr 22 22:02:11.132: INFO: 90 %ile: 751.344917ms Apr 22 22:02:11.132: INFO: 99 %ile: 801.22148ms Apr 22 22:02:11.132: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:11.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-791" for this suite. 
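------------------------------
The percentile summary above is computed over 200 samples, each measuring the time from Service creation until the Service's Endpoints object first lists a ready pod address. A hand-run approximation with kubectl (the deployment name is illustrative; the image is one used elsewhere in this run):

# back a Service with a single pod, like the test's svc-latency-rc
kubectl create deployment svc-latency-demo --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1

# create the Service, then watch how quickly its endpoints populate
kubectl expose deployment svc-latency-demo --port=80
kubectl get endpoints svc-latency-demo --watch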
• [SLOW TEST:14.824 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":21,"skipped":383,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:11.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Apr 22 22:02:11.235: INFO: created test-podtemplate-1 Apr 22 22:02:11.238: INFO: created test-podtemplate-2 Apr 22 22:02:11.241: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Apr 22 22:02:11.244: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Apr 22 22:02:11.252: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:11.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-4645" for this suite. 
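------------------------------
The collection delete above (create a labelled set, list by selector, delete the collection, re-list to confirm the count) can be approximated with kubectl; note that kubectl deletes the matching items it lists, while the test calls the DeleteCollection API directly. PodTemplates have no dedicated create subcommand, so a manifest is needed; the names and label here are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PodTemplate
metadata:
  name: test-podtemplate-1
  labels:
    podtemplate-set: "true"
template:
  metadata:
    labels:
      podtemplate-set: "true"
  spec:
    containers:
    - name: httpd
      image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF

# list by label, delete everything that matches, then confirm nothing is left
kubectl get podtemplates -l podtemplate-set=true
kubectl delete podtemplates -l podtemplate-set=true
kubectl get podtemplates -l podtemplate-set=true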
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":22,"skipped":416,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:19.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5036 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-5036 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5036 Apr 22 22:01:19.779: INFO: Found 0 stateful pods, waiting for 1 Apr 22 22:01:29.784: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 22 22:01:29.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5036 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:01:30.050: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 22:01:30.050: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:01:30.050: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:01:30.053: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 22 22:01:40.056: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:01:40.056: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:01:40.067: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 22:01:40.067: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC }] Apr 22 22:01:40.067: INFO: Apr 22 22:01:40.067: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 22 22:01:41.071: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99680201s Apr 22 22:01:42.074: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992791799s Apr 22 22:01:43.079: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.989344808s Apr 22 22:01:44.084: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.983932377s Apr 22 22:01:45.089: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 4.978521838s Apr 22 22:01:46.093: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.97417092s Apr 22 22:01:47.098: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.970442887s Apr 22 22:01:48.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.966300304s Apr 22 22:01:49.105: INFO: Verifying statefulset ss doesn't scale past 3 for another 961.774014ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5036 Apr 22 22:01:50.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5036 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:01:50.367: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 22 22:01:50.367: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:01:50.367: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:01:50.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5036 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:01:50.652: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Apr 22 22:01:50.652: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:01:50.652: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:01:50.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5036 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:01:50.894: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Apr 22 22:01:50.894: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:01:50.894: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:01:50.898: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 22:01:50.898: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 22:01:50.898: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 22 22:01:50.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5036 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:01:51.382: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 22:01:51.382: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:01:51.382: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:01:51.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5036 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 
22 22:01:51.628: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 22:01:51.628: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:01:51.628: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:01:51.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5036 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:01:51.890: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 22:01:51.890: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:01:51.890: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:01:51.890: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:01:51.893: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 22 22:02:01.903: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:02:01.903: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:02:01.903: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:02:01.913: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 22:02:01.913: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC }] Apr 22 22:02:01.913: INFO: ss-1 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:01.913: INFO: ss-2 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:01.913: INFO: Apr 22 22:02:01.913: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 22:02:02.917: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 22:02:02.917: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC }] Apr 22 22:02:02.917: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:02.917: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:02.917: INFO: Apr 22 22:02:02.917: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 22:02:03.921: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 22:02:03.921: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC }] Apr 22 22:02:03.921: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:03.921: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:03.921: INFO: Apr 22 22:02:03.921: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 22:02:04.924: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 22:02:04.924: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC }] Apr 22 22:02:04.924: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:04.924: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:04.924: INFO: Apr 22 22:02:04.924: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 22:02:05.928: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 22:02:05.928: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:19 +0000 UTC }] Apr 22 22:02:05.929: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:05.929: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:05.929: INFO: Apr 22 22:02:05.929: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 22:02:06.933: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 22:02:06.933: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:06.933: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:01:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2022-04-22 22:01:40 +0000 UTC }] Apr 22 22:02:06.933: INFO: Apr 22 22:02:06.933: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 22 22:02:07.936: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.976698894s Apr 22 22:02:08.939: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.973517416s Apr 22 22:02:09.942: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.970340181s Apr 22 22:02:10.945: INFO: Verifying statefulset ss doesn't scale past 0 for another 966.984776ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5036 Apr 22 22:02:11.949: INFO: Scaling statefulset ss to 0 Apr 22 22:02:11.959: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Apr 22 22:02:11.961: INFO: Deleting all statefulset in ns statefulset-5036 Apr 22 22:02:11.964: INFO: Scaling statefulset ss to 0 Apr 22 22:02:11.971: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:02:11.973: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:11.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5036" for this suite. • [SLOW TEST:52.241 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:39.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' 
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:12.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9801" for this suite. • [SLOW TEST:32.331 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:11.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:15.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6962" for this suite. 
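------------------------------
The Docker Containers check above rests on the documented precedence rule: when a container spec leaves both command and args unset, the container runs the image's own ENTRYPOINT and CMD. A minimal pod that exercises the same default (the pod name is illustrative; the image is one used elsewhere in this run, not necessarily the test's own):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  containers:
  - name: httpd
    image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
    # no command: and no args: here, so the image ENTRYPOINT/CMD are used
EOF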
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":448,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:05.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Apr 22 22:02:05.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1608 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Apr 22 22:02:05.556: INFO: stderr: "" Apr 22 22:02:05.556: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Apr 22 22:02:05.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1608 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' Apr 22 22:02:06.005: INFO: stderr: "" Apr 22 22:02:06.005: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Apr 22 22:02:06.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1608 delete pods e2e-test-httpd-pod' Apr 22 22:02:17.906: INFO: stderr: "" Apr 22 22:02:17.906: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:17.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1608" for this suite. 
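------------------------------
The check above hinges on --dry-run=server: the patch is admitted and validated by the API server but never persisted, so the live pod keeps its original image. The verification step the test performs between the patch and the delete can be reproduced with a jsonpath query against the pod shown in the log:

kubectl get pod e2e-test-httpd-pod -n kubectl-1608 -o jsonpath='{.spec.containers[0].image}'
# expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1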
• [SLOW TEST:12.542 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":19,"skipped":393,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:12.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Apr 22 22:02:12.112: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Apr 22 22:02:12.115: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 22 22:02:12.115: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Apr 22 22:02:12.126: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 22 22:02:12.126: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Apr 22 22:02:12.139: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Apr 22 22:02:12.139: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Apr 22 22:02:19.186: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:19.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-6394" for this suite. • [SLOW TEST:7.117 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":5,"skipped":58,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:10.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:02:10.781: INFO: The status of Pod server-envvars-9fb948ef-32a9-4588-b7bb-51fb90de92ef is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:02:12.784: INFO: The status of Pod server-envvars-9fb948ef-32a9-4588-b7bb-51fb90de92ef is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:02:14.784: INFO: The status of Pod server-envvars-9fb948ef-32a9-4588-b7bb-51fb90de92ef is Running (Ready = true) Apr 22 22:02:14.803: INFO: Waiting up to 5m0s for pod "client-envvars-2de10523-9b4a-43f7-bc00-0df92855903e" in namespace "pods-6339" to be "Succeeded or Failed" Apr 22 22:02:14.805: INFO: Pod "client-envvars-2de10523-9b4a-43f7-bc00-0df92855903e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247319ms Apr 22 22:02:16.808: INFO: Pod "client-envvars-2de10523-9b4a-43f7-bc00-0df92855903e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005376519s Apr 22 22:02:18.812: INFO: Pod "client-envvars-2de10523-9b4a-43f7-bc00-0df92855903e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.008784065s Apr 22 22:02:20.814: INFO: Pod "client-envvars-2de10523-9b4a-43f7-bc00-0df92855903e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011195214s STEP: Saw pod success Apr 22 22:02:20.814: INFO: Pod "client-envvars-2de10523-9b4a-43f7-bc00-0df92855903e" satisfied condition "Succeeded or Failed" Apr 22 22:02:20.817: INFO: Trying to get logs from node node1 pod client-envvars-2de10523-9b4a-43f7-bc00-0df92855903e container env3cont: STEP: delete the pod Apr 22 22:02:20.827: INFO: Waiting for pod client-envvars-2de10523-9b4a-43f7-bc00-0df92855903e to disappear Apr 22 22:02:20.830: INFO: Pod client-envvars-2de10523-9b4a-43f7-bc00-0df92855903e no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:20.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6339" for this suite. • [SLOW TEST:10.090 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":427,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:17.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-2d8a70a4-5b77-4036-b411-d9273d8e8204 STEP: Creating a pod to test consume configMaps Apr 22 22:02:17.972: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e00fc14-e27f-41a8-8d8b-9bd8e3638f03" in namespace "configmap-7255" to be "Succeeded or Failed" Apr 22 22:02:17.974: INFO: Pod "pod-configmaps-5e00fc14-e27f-41a8-8d8b-9bd8e3638f03": Phase="Pending", Reason="", readiness=false. Elapsed: 1.836577ms Apr 22 22:02:19.979: INFO: Pod "pod-configmaps-5e00fc14-e27f-41a8-8d8b-9bd8e3638f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006638064s Apr 22 22:02:21.982: INFO: Pod "pod-configmaps-5e00fc14-e27f-41a8-8d8b-9bd8e3638f03": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010363359s STEP: Saw pod success Apr 22 22:02:21.982: INFO: Pod "pod-configmaps-5e00fc14-e27f-41a8-8d8b-9bd8e3638f03" satisfied condition "Succeeded or Failed" Apr 22 22:02:21.985: INFO: Trying to get logs from node node2 pod pod-configmaps-5e00fc14-e27f-41a8-8d8b-9bd8e3638f03 container agnhost-container: STEP: delete the pod Apr 22 22:02:22.010: INFO: Waiting for pod pod-configmaps-5e00fc14-e27f-41a8-8d8b-9bd8e3638f03 to disappear Apr 22 22:02:22.012: INFO: Pod pod-configmaps-5e00fc14-e27f-41a8-8d8b-9bd8e3638f03 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:22.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7255" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:20.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-3e101295-70f2-4b47-a290-9ca30e25f880 in namespace container-probe-7531 Apr 22 21:58:26.696: INFO: Started pod liveness-3e101295-70f2-4b47-a290-9ca30e25f880 in namespace container-probe-7531 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 21:58:26.699: INFO: Initial restart count of pod liveness-3e101295-70f2-4b47-a290-9ca30e25f880 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:27.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7531" for this suite. 
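[editorial sketch] The probe shape exercised by the tcp:8080 liveness test above, as a minimal sketch (pod name hypothetical; assumes the agnhost test image, whose netexec server listens on 8080): because the tcpSocket probe targets a port the container actually serves, the kubelet's checks keep passing and restartCount stays 0 for the whole soak period.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo
spec:
  containers:
  - name: server
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["netexec", "--http-port=8080"]
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
# after soaking (the test above waits roughly four minutes), the restart count should still be 0
kubectl get pod liveness-tcp-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'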
• [SLOW TEST:246.592 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":119,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:15.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 22 22:02:15.452: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:27.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2396" for this suite. • [SLOW TEST:12.388 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":465,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:22.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-6604/secret-test-c4b4a23f-98dd-45d2-b83b-4cefcf7ce00e STEP: Creating a pod to test consume secrets Apr 22 22:02:22.111: INFO: Waiting up to 5m0s for pod "pod-configmaps-314e989b-a95b-4f35-838f-05ed3e1d1ed7" in namespace "secrets-6604" to be "Succeeded or Failed" Apr 22 22:02:22.114: INFO: Pod "pod-configmaps-314e989b-a95b-4f35-838f-05ed3e1d1ed7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.233391ms Apr 22 22:02:24.117: INFO: Pod "pod-configmaps-314e989b-a95b-4f35-838f-05ed3e1d1ed7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005967706s Apr 22 22:02:26.120: INFO: Pod "pod-configmaps-314e989b-a95b-4f35-838f-05ed3e1d1ed7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008727578s Apr 22 22:02:28.124: INFO: Pod "pod-configmaps-314e989b-a95b-4f35-838f-05ed3e1d1ed7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013137635s STEP: Saw pod success Apr 22 22:02:28.124: INFO: Pod "pod-configmaps-314e989b-a95b-4f35-838f-05ed3e1d1ed7" satisfied condition "Succeeded or Failed" Apr 22 22:02:28.126: INFO: Trying to get logs from node node2 pod pod-configmaps-314e989b-a95b-4f35-838f-05ed3e1d1ed7 container env-test: STEP: delete the pod Apr 22 22:02:28.141: INFO: Waiting for pod pod-configmaps-314e989b-a95b-4f35-838f-05ed3e1d1ed7 to disappear Apr 22 22:02:28.144: INFO: Pod pod-configmaps-314e989b-a95b-4f35-838f-05ed3e1d1ed7 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:28.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6604" for this suite. • [SLOW TEST:6.078 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":428,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:28.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:44.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9893" for this suite. 
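[editorial sketch] The scoped-quota mechanics being verified above: a ResourceQuota with scopes: ["Terminating"] only counts pods that set spec.activeDeadlineSeconds, while scopes: ["NotTerminating"] counts the rest, so each pod is charged to exactly one of the two quotas. A minimal sketch (names and limits hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating
spec:
  hard:
    pods: "1"
  scopes: ["Terminating"]
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-terminating
spec:
  hard:
    pods: "1"
  scopes: ["NotTerminating"]
EOF
# a long-running pod (no activeDeadlineSeconds) shows up in quota-not-terminating's usage;
# a pod with spec.activeDeadlineSeconds set is charged to quota-terminating instead
kubectl describe resourcequota quota-terminating quota-not-terminating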
• [SLOW TEST:16.105 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":22,"skipped":435,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:27.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Apr 22 22:02:27.303: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:02:29.306: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:02:31.310: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Apr 22 22:02:31.325: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:02:33.332: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:02:35.331: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Apr 22 22:02:35.339: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 22:02:35.341: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 22:02:37.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 22:02:37.346: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 22:02:39.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 22:02:39.345: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 22:02:41.343: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 22:02:41.346: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 22:02:43.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 22:02:43.346: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 22:02:45.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 22:02:45.346: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 22:02:47.341: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 22:02:47.344: INFO: Pod pod-with-prestop-http-hook still exists Apr 22 22:02:49.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 22 22:02:49.344: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:49.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9835" for this suite. • [SLOW TEST:22.096 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":126,"failed":0} [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:49.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Apr 22 22:02:49.404: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-8307 c66fbb0d-9ca4-4d97-b870-8327ebc44f01 41494 0 2022-04-22 22:02:49 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-04-22 22:02:49 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ksrdx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ksrdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Pri
orityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:02:49.407: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:02:51.412: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:02:53.413: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Apr 22 22:02:53.413: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8307 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:02:53.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Apr 22 22:02:53.515: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8307 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:02:53.515: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:02:53.605: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:53.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8307" for this suite. 
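[editorial sketch] The pod object dumped above reduces to a small manifest; a sketch follows (nameserver and search values taken from the spec in the log, container likewise the agnhost pause container): with dnsPolicy: None the kubelet ignores the cluster DNS settings and writes only the dnsConfig values into the container's /etc/resolv.conf.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-dns-nameservers
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
EOF
# resolv.conf should contain exactly the configured nameserver and search domain
kubectl exec test-dns-nameservers -- cat /etc/resolv.conf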
• ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":8,"skipped":126,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:53.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:02:53.750: INFO: The status of Pod busybox-host-aliases0876201c-1ef1-4713-acf6-76c8cfd4cca0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:02:55.754: INFO: The status of Pod busybox-host-aliases0876201c-1ef1-4713-acf6-76c8cfd4cca0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:02:57.754: INFO: The status of Pod busybox-host-aliases0876201c-1ef1-4713-acf6-76c8cfd4cca0 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:02:57.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-927" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":163,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:44.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:00.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9881" for this suite. • [SLOW TEST:16.109 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":23,"skipped":441,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:00.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-b11c30ef-0e49-42bb-a1cd-fa9b5b9e15be STEP: Creating a pod to test consume secrets Apr 22 22:03:00.447: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5079ce71-79c8-4aab-88bd-38155a44e438" in namespace "projected-6517" to be "Succeeded or Failed" Apr 22 22:03:00.449: INFO: Pod "pod-projected-secrets-5079ce71-79c8-4aab-88bd-38155a44e438": Phase="Pending", Reason="", readiness=false. Elapsed: 1.807217ms Apr 22 22:03:02.453: INFO: Pod "pod-projected-secrets-5079ce71-79c8-4aab-88bd-38155a44e438": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005059363s Apr 22 22:03:04.456: INFO: Pod "pod-projected-secrets-5079ce71-79c8-4aab-88bd-38155a44e438": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008817508s Apr 22 22:03:06.462: INFO: Pod "pod-projected-secrets-5079ce71-79c8-4aab-88bd-38155a44e438": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014139389s STEP: Saw pod success Apr 22 22:03:06.462: INFO: Pod "pod-projected-secrets-5079ce71-79c8-4aab-88bd-38155a44e438" satisfied condition "Succeeded or Failed" Apr 22 22:03:06.464: INFO: Trying to get logs from node node1 pod pod-projected-secrets-5079ce71-79c8-4aab-88bd-38155a44e438 container projected-secret-volume-test: STEP: delete the pod Apr 22 22:03:06.475: INFO: Waiting for pod pod-projected-secrets-5079ce71-79c8-4aab-88bd-38155a44e438 to disappear Apr 22 22:03:06.478: INFO: Pod pod-projected-secrets-5079ce71-79c8-4aab-88bd-38155a44e438 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:06.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6517" for this suite. • [SLOW TEST:6.074 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":444,"failed":0} [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:06.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 22 22:03:11.543: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:11.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2775" for this suite. 
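[editorial sketch] What the termination-message assertion above ("Expected: &{} to match ...") relies on: with terminationMessagePolicy: FallbackToLogsOnError, container logs are copied into the termination message only when the container fails, so a container that exits 0 without writing /dev/termination-log reports an empty message. A sketch (pod name and command hypothetical; busybox image taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# once the pod is Succeeded, the terminated state's message field is empty
kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'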
• [SLOW TEST:5.072 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:11.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-d40aa444-1245-494b-a98f-84cb19d32bbb STEP: Creating a pod to test consume secrets Apr 22 22:03:11.648: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-087623f3-ff39-4c82-aecb-e3fa656af62b" in namespace "projected-2885" to be "Succeeded or Failed" Apr 22 22:03:11.652: INFO: Pod "pod-projected-secrets-087623f3-ff39-4c82-aecb-e3fa656af62b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076179ms Apr 22 22:03:13.656: INFO: Pod "pod-projected-secrets-087623f3-ff39-4c82-aecb-e3fa656af62b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008286068s Apr 22 22:03:15.661: INFO: Pod "pod-projected-secrets-087623f3-ff39-4c82-aecb-e3fa656af62b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013414371s STEP: Saw pod success Apr 22 22:03:15.661: INFO: Pod "pod-projected-secrets-087623f3-ff39-4c82-aecb-e3fa656af62b" satisfied condition "Succeeded or Failed" Apr 22 22:03:15.664: INFO: Trying to get logs from node node2 pod pod-projected-secrets-087623f3-ff39-4c82-aecb-e3fa656af62b container projected-secret-volume-test: STEP: delete the pod Apr 22 22:03:15.679: INFO: Waiting for pod pod-projected-secrets-087623f3-ff39-4c82-aecb-e3fa656af62b to disappear Apr 22 22:03:15.683: INFO: Pod pod-projected-secrets-087623f3-ff39-4c82-aecb-e3fa656af62b no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:15.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2885" for this suite. 
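[editorial sketch] The volume shape for the non-root/defaultMode/fsGroup case above, sketched (all names and the mode/uid/gid values are illustrative, not the test's exact ones; assumes a Secret named projected-secret-demo already exists): the projected secret is mounted with a restrictive file mode, and fsGroup sets group ownership so the non-root user can still read it.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: projected-secret-demo   # assumes this Secret already exists
  containers:
  - name: demo
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
EOF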
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":468,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:15.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 22:03:15.747: INFO: Waiting up to 5m0s for pod "downwardapi-volume-093ed19b-3552-4390-ac2d-45516771b8de" in namespace "downward-api-9497" to be "Succeeded or Failed" Apr 22 22:03:15.750: INFO: Pod "downwardapi-volume-093ed19b-3552-4390-ac2d-45516771b8de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300123ms Apr 22 22:03:17.754: INFO: Pod "downwardapi-volume-093ed19b-3552-4390-ac2d-45516771b8de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006863434s Apr 22 22:03:19.759: INFO: Pod "downwardapi-volume-093ed19b-3552-4390-ac2d-45516771b8de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011299357s STEP: Saw pod success Apr 22 22:03:19.759: INFO: Pod "downwardapi-volume-093ed19b-3552-4390-ac2d-45516771b8de" satisfied condition "Succeeded or Failed" Apr 22 22:03:19.761: INFO: Trying to get logs from node node2 pod downwardapi-volume-093ed19b-3552-4390-ac2d-45516771b8de container client-container: STEP: delete the pod Apr 22 22:03:19.775: INFO: Waiting for pod downwardapi-volume-093ed19b-3552-4390-ac2d-45516771b8de to disappear Apr 22 22:03:19.777: INFO: Pod downwardapi-volume-093ed19b-3552-4390-ac2d-45516771b8de no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:19.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9497" for this suite. 
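[editorial sketch] The downward-API plumbing checked above, sketched (names and the limit value illustrative): a downwardAPI volume item with a resourceFieldRef exposes the container's CPU limit as a file inside the pod, which the container then prints so the value can be read back from its logs.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF
# with a 1m divisor the file reports the limit in millicores, so the log reads 500
kubectl logs downward-cpu-demo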
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:57.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:02:57.841: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 22 22:03:02.844: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 22 22:03:02.844: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 22 22:03:04.849: INFO: Creating deployment "test-rollover-deployment" Apr 22 22:03:04.857: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 22 22:03:06.862: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 22 22:03:06.868: INFO: Ensure that both replica sets have 1 created replica Apr 22 22:03:06.873: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 22 22:03:06.879: INFO: Updating deployment test-rollover-deployment Apr 22 22:03:06.879: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 22 22:03:08.885: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 22 22:03:08.890: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 22 22:03:08.895: INFO: all replica sets need to contain the pod-template-hash label Apr 22 22:03:08.895: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261786, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:03:10.902: INFO: all replica sets need to contain the pod-template-hash label Apr 22 22:03:10.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261789, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:03:12.903: INFO: all replica sets need to contain the pod-template-hash label Apr 22 22:03:12.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261789, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:03:14.903: INFO: all replica sets need to contain the pod-template-hash label Apr 22 22:03:14.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261789, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:03:16.904: INFO: all replica sets need to contain the pod-template-hash label Apr 22 22:03:16.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261789, 
loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:03:18.904: INFO: all replica sets need to contain the pod-template-hash label Apr 22 22:03:18.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261789, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261784, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:03:20.902: INFO: Apr 22 22:03:20.902: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Apr 22 22:03:20.909: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4796 9c3b9a72-629f-449e-8377-e5b73e9e6090 42003 2 2022-04-22 22:03:04 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-22 22:03:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-22 22:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002cf2b48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-22 22:03:04 +0000 UTC,LastTransitionTime:2022-04-22 22:03:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2022-04-22 22:03:19 +0000 UTC,LastTransitionTime:2022-04-22 22:03:04 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 22 22:03:20.912: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-4796 35a2aeab-7659-425d-849f-f2f634c28663 41989 2 2022-04-22 22:03:06 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 9c3b9a72-629f-449e-8377-e5b73e9e6090 0xc002cf30c0 0xc002cf30c1}] [] [{kube-controller-manager Update apps/v1 2022-04-22 22:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c3b9a72-629f-449e-8377-e5b73e9e6090\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log 
File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002cf3138 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:03:20.912: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 22 22:03:20.913: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4796 c357dc13-ef8e-4aa1-b26a-336b072edb77 42002 2 2022-04-22 22:02:57 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 9c3b9a72-629f-449e-8377-e5b73e9e6090 0xc002cf2eb7 0xc002cf2eb8}] [] [{e2e.test Update apps/v1 2022-04-22 22:02:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-22 22:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c3b9a72-629f-449e-8377-e5b73e9e6090\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002cf2f58 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:03:20.913: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-4796 8266e72a-0acc-4464-bc7f-9870466adb08 41814 2 2022-04-22 22:03:04 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 
Deployment test-rollover-deployment 9c3b9a72-629f-449e-8377-e5b73e9e6090 0xc002cf2fc7 0xc002cf2fc8}] [] [{kube-controller-manager Update apps/v1 2022-04-22 22:03:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c3b9a72-629f-449e-8377-e5b73e9e6090\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002cf3058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:03:20.916: INFO: Pod "test-rollover-deployment-98c5f4599-66rj4" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-66rj4 test-rollover-deployment-98c5f4599- deployment-4796 5471b23c-1887-4d0d-b41a-e36d4a2e4ac0 41863 0 2022-04-22 22:03:06 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.194" ], "mac": "22:1a:66:b0:92:ca", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.194" ], "mac": "22:1a:66:b0:92:ca", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 35a2aeab-7659-425d-849f-f2f634c28663 0xc002cf362f 0xc002cf3640}] [] [{kube-controller-manager Update v1 2022-04-22 22:03:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35a2aeab-7659-425d-849f-f2f634c28663\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:03:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:03:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g4rnk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g4rnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.194,StartTime:2022-04-22 22:03:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:03:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://09c5cf4222a73e88ca5090cbd7db218e8e936458bb40c1a3da6263dae23670cf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:20.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4796" for this suite. 
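For reference, a minimal sketch (Go types from k8s.io/api; the deployment name, labels, image, and strategy numbers are copied from the dump above, everything else is illustrative, and this is not the e2e suite's own code) of the rollover strategy this Deployment used: surge by at most one pod, never drop below the desired replica count, and count a new pod as available only after 10 seconds of readiness.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func rolloverDeployment() *appsv1.Deployment {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0) // never take the old pod down early
	maxSurge := intstr.FromInt(1)       // bring the replacement up alongside it
	labels := map[string]string{"name": "rollover-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			MinReadySeconds: 10, // matches MinReadySeconds:10 in the dump above
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
					}},
				},
			},
		},
	}
}

With MaxUnavailable pinned to 0, the old pod stays up until its replacement has been Ready for MinReadySeconds, which is consistent with the DeploymentStatus above reporting UnavailableReplicas:0 throughout the rollover.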
• [SLOW TEST:23.119 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":10,"skipped":174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:19.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Apr 22 22:03:20.003: INFO: Waiting up to 5m0s for pod "pod-efabb811-a8e7-48a4-a678-99b6fda62670" in namespace "emptydir-3992" to be "Succeeded or Failed" Apr 22 22:03:20.005: INFO: Pod "pod-efabb811-a8e7-48a4-a678-99b6fda62670": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317614ms Apr 22 22:03:22.008: INFO: Pod "pod-efabb811-a8e7-48a4-a678-99b6fda62670": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00559248s Apr 22 22:03:24.012: INFO: Pod "pod-efabb811-a8e7-48a4-a678-99b6fda62670": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009260562s STEP: Saw pod success Apr 22 22:03:24.012: INFO: Pod "pod-efabb811-a8e7-48a4-a678-99b6fda62670" satisfied condition "Succeeded or Failed" Apr 22 22:03:24.014: INFO: Trying to get logs from node node2 pod pod-efabb811-a8e7-48a4-a678-99b6fda62670 container test-container: STEP: delete the pod Apr 22 22:03:24.026: INFO: Waiting for pod pod-efabb811-a8e7-48a4-a678-99b6fda62670 to disappear Apr 22 22:03:24.028: INFO: Pod pod-efabb811-a8e7-48a4-a678-99b6fda62670 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:24.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3992" for this suite. 
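A minimal sketch of the kind of pod this EmptyDir conformance test creates and then asserts on via its logs (the real test drives this through the agnhost mounttest helper; the busybox image, paths, and stat command here are illustrative stand-ins): mount an EmptyDir with the medium left at the default and have the container print the mount point's mode.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirModePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-check"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Leaving Medium empty selects the node's default storage
				// medium, as opposed to StorageMediumMemory (tmpfs).
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.34",
				// Print the octal mode of the mount point; the test then
				// reads the pod log after the pod reaches Succeeded.
				Command: []string{"/bin/sh", "-c", "stat -c %a /mnt/test"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/test",
				}},
			}},
		},
	}
}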
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":563,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:21.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 22 22:03:21.080: INFO: Waiting up to 5m0s for pod "pod-c0ba2743-632d-468a-9457-f6bcef206f93" in namespace "emptydir-981" to be "Succeeded or Failed" Apr 22 22:03:21.082: INFO: Pod "pod-c0ba2743-632d-468a-9457-f6bcef206f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296302ms Apr 22 22:03:23.087: INFO: Pod "pod-c0ba2743-632d-468a-9457-f6bcef206f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007415797s Apr 22 22:03:25.092: INFO: Pod "pod-c0ba2743-632d-468a-9457-f6bcef206f93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012093065s STEP: Saw pod success Apr 22 22:03:25.092: INFO: Pod "pod-c0ba2743-632d-468a-9457-f6bcef206f93" satisfied condition "Succeeded or Failed" Apr 22 22:03:25.095: INFO: Trying to get logs from node node1 pod pod-c0ba2743-632d-468a-9457-f6bcef206f93 container test-container: STEP: delete the pod Apr 22 22:03:25.108: INFO: Waiting for pod pod-c0ba2743-632d-468a-9457-f6bcef206f93 to disappear Apr 22 22:03:25.110: INFO: Pod pod-c0ba2743-632d-468a-9457-f6bcef206f93 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:25.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-981" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":228,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:24.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 22 22:03:24.087: INFO: Waiting up to 5m0s for pod "pod-00aad4d3-a5fd-4e73-8d02-226cb6a921ca" in namespace "emptydir-7660" to be "Succeeded or Failed" Apr 22 22:03:24.090: INFO: Pod "pod-00aad4d3-a5fd-4e73-8d02-226cb6a921ca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.416243ms Apr 22 22:03:26.093: INFO: Pod "pod-00aad4d3-a5fd-4e73-8d02-226cb6a921ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006269218s Apr 22 22:03:28.096: INFO: Pod "pod-00aad4d3-a5fd-4e73-8d02-226cb6a921ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009556632s STEP: Saw pod success Apr 22 22:03:28.096: INFO: Pod "pod-00aad4d3-a5fd-4e73-8d02-226cb6a921ca" satisfied condition "Succeeded or Failed" Apr 22 22:03:28.099: INFO: Trying to get logs from node node2 pod pod-00aad4d3-a5fd-4e73-8d02-226cb6a921ca container test-container: STEP: delete the pod Apr 22 22:03:28.112: INFO: Waiting for pod pod-00aad4d3-a5fd-4e73-8d02-226cb6a921ca to disappear Apr 22 22:03:28.114: INFO: Pod pod-00aad4d3-a5fd-4e73-8d02-226cb6a921ca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:28.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7660" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":568,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:00:41.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-548 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Apr 22 22:00:41.823: INFO: Found 0 stateful pods, waiting for 3 Apr 22 22:00:51.833: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 22:00:51.833: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 22:00:51.833: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 22 22:01:01.834: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 22:01:01.834: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 22:01:01.834: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 22 22:01:01.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-548 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:01:02.324: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 22:01:02.324: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:01:02.324: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Apr 22 22:01:12.353: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 22 22:01:22.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-548 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:01:22.633: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 22 22:01:22.633: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:01:22.633: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:01:32.650: INFO: Waiting for StatefulSet statefulset-548/ss2 to complete update Apr 22 22:01:32.650: INFO: Waiting for Pod statefulset-548/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 22 22:01:32.650: INFO: Waiting for Pod statefulset-548/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 22 22:01:32.650: INFO: Waiting for Pod statefulset-548/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 22 22:01:42.659: INFO: Waiting for StatefulSet statefulset-548/ss2 to complete update Apr 22 22:01:42.659: INFO: Waiting for Pod statefulset-548/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 22 22:01:42.659: INFO: Waiting for Pod statefulset-548/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 22 22:01:52.657: INFO: Waiting for StatefulSet statefulset-548/ss2 to complete update Apr 22 22:01:52.657: INFO: Waiting for Pod statefulset-548/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Rolling back to a previous revision Apr 22 22:02:02.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-548 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:02:02.899: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 22:02:02.899: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:02:02.899: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:02:12.927: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 22 22:02:22.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-548 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:02:23.642: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 22 22:02:23.642: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:02:23.642: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:02:33.659: INFO: Waiting for StatefulSet statefulset-548/ss2 to complete update Apr 22 22:02:33.659: INFO: Waiting for Pod statefulset-548/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Apr 22 22:02:33.659: INFO: Waiting for Pod statefulset-548/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Apr 22 22:02:43.668: 
INFO: Waiting for StatefulSet statefulset-548/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Apr 22 22:02:53.667: INFO: Deleting all statefulset in ns statefulset-548 Apr 22 22:02:53.669: INFO: Scaling statefulset ss2 to 0 Apr 22 22:03:33.683: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:03:33.686: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:33.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-548" for this suite. • [SLOW TEST:171.908 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":8,"skipped":158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:33.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3325 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3325 I0422 22:03:33.819086 39 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3325, replica count: 2 I0422 22:03:36.871693 39 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 22:03:36.871: INFO: Creating new exec pod Apr 22 22:03:41.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3325 exec execpodcqv4q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 22 22:03:42.150: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 22 22:03:42.150: INFO: stdout: "" Apr 22 22:03:43.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3325 exec execpodcqv4q -- /bin/sh -x -c echo hostName | 
nc -v -t -w 2 externalname-service 80' Apr 22 22:03:43.388: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 22 22:03:43.388: INFO: stdout: "" Apr 22 22:03:44.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3325 exec execpodcqv4q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 22 22:03:44.409: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 22 22:03:44.409: INFO: stdout: "" Apr 22 22:03:45.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3325 exec execpodcqv4q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 22 22:03:45.400: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 22 22:03:45.400: INFO: stdout: "externalname-service-v5gph" Apr 22 22:03:45.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3325 exec execpodcqv4q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.50.197 80' Apr 22 22:03:45.636: INFO: stderr: "+ nc -v -t -w 2 10.233.50.197 80\nConnection to 10.233.50.197 80 port [tcp/http] succeeded!\n+ echo hostName\n" Apr 22 22:03:45.636: INFO: stdout: "externalname-service-v5gph" Apr 22 22:03:45.636: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:45.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3325" for this suite. 
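The type flip exercised here can be reproduced with a short client-go fragment (v0.21-era call signatures; the service name and port come from the log, while the ExternalName target and selector labels are illustrative, and error handling is compressed): create a Service of type ExternalName, then clear the external name and switch it to ClusterIP so kube-proxy starts routing to the endpoints behind the selector.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func flipToClusterIP(ctx context.Context, cs kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com", // illustrative CNAME target
		},
	}
	created, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Flip the type: drop the external name, add a selector and port so the
	// service now load-balances to the backing pods (in the test, the pods
	// created by the externalname-service replication controller).
	created.Spec.Type = corev1.ServiceTypeClusterIP
	created.Spec.ExternalName = ""
	created.Spec.Selector = map[string]string{"name": "externalname-service"}
	created.Spec.Ports = []corev1.ServicePort{{Port: 80}}
	_, err = cs.CoreV1().Services(ns).Update(ctx, created, metav1.UpdateOptions{})
	return err
}

Once the update lands, the nc probes in the log stop timing out and return a pod hostname, which is exactly the transition the retries above are waiting for.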
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:11.873 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":9,"skipped":193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:25.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-nmk4 STEP: Creating a pod to test atomic-volume-subpath Apr 22 22:03:25.165: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-nmk4" in namespace "subpath-1804" to be "Succeeded or Failed" Apr 22 22:03:25.167: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10506ms Apr 22 22:03:27.170: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004929016s Apr 22 22:03:29.176: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010344796s Apr 22 22:03:31.180: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Running", Reason="", readiness=true. Elapsed: 6.01432921s Apr 22 22:03:33.184: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Running", Reason="", readiness=true. Elapsed: 8.01812495s Apr 22 22:03:35.188: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Running", Reason="", readiness=true. Elapsed: 10.023013489s Apr 22 22:03:37.192: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Running", Reason="", readiness=true. Elapsed: 12.026957224s Apr 22 22:03:39.195: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Running", Reason="", readiness=true. Elapsed: 14.02982569s Apr 22 22:03:41.200: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Running", Reason="", readiness=true. Elapsed: 16.0343502s Apr 22 22:03:43.204: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Running", Reason="", readiness=true. Elapsed: 18.038594829s Apr 22 22:03:45.208: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Running", Reason="", readiness=true. Elapsed: 20.042672888s Apr 22 22:03:47.213: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Running", Reason="", readiness=true. Elapsed: 22.047230031s Apr 22 22:03:49.217: INFO: Pod "pod-subpath-test-downwardapi-nmk4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.051832773s STEP: Saw pod success Apr 22 22:03:49.217: INFO: Pod "pod-subpath-test-downwardapi-nmk4" satisfied condition "Succeeded or Failed" Apr 22 22:03:49.220: INFO: Trying to get logs from node node2 pod pod-subpath-test-downwardapi-nmk4 container test-container-subpath-downwardapi-nmk4: STEP: delete the pod Apr 22 22:03:49.238: INFO: Waiting for pod pod-subpath-test-downwardapi-nmk4 to disappear Apr 22 22:03:49.241: INFO: Pod pod-subpath-test-downwardapi-nmk4 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-nmk4 Apr 22 22:03:49.241: INFO: Deleting pod "pod-subpath-test-downwardapi-nmk4" in namespace "subpath-1804" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:49.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1804" for this suite. • [SLOW TEST:24.125 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":230,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:45.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 22:03:45.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f9311c5-b9db-41c1-ac16-2fd69f043d27" in namespace "downward-api-7080" to be "Succeeded or Failed" Apr 22 22:03:45.807: INFO: Pod "downwardapi-volume-4f9311c5-b9db-41c1-ac16-2fd69f043d27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.519228ms Apr 22 22:03:47.812: INFO: Pod "downwardapi-volume-4f9311c5-b9db-41c1-ac16-2fd69f043d27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009923864s Apr 22 22:03:49.817: INFO: Pod "downwardapi-volume-4f9311c5-b9db-41c1-ac16-2fd69f043d27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015051539s Apr 22 22:03:51.822: INFO: Pod "downwardapi-volume-4f9311c5-b9db-41c1-ac16-2fd69f043d27": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019392675s STEP: Saw pod success Apr 22 22:03:51.822: INFO: Pod "downwardapi-volume-4f9311c5-b9db-41c1-ac16-2fd69f043d27" satisfied condition "Succeeded or Failed" Apr 22 22:03:51.824: INFO: Trying to get logs from node node1 pod downwardapi-volume-4f9311c5-b9db-41c1-ac16-2fd69f043d27 container client-container: STEP: delete the pod Apr 22 22:03:52.107: INFO: Waiting for pod downwardapi-volume-4f9311c5-b9db-41c1-ac16-2fd69f043d27 to disappear Apr 22 22:03:52.109: INFO: Pod downwardapi-volume-4f9311c5-b9db-41c1-ac16-2fd69f043d27 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:52.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7080" for this suite. • [SLOW TEST:6.350 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":247,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:28.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-8153 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 22 22:03:28.149: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 22 22:03:28.180: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:03:30.183: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:03:32.184: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:03:34.184: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:03:36.183: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:03:38.184: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:03:40.185: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:03:42.184: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:03:44.183: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:03:46.185: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:03:48.184: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:03:50.184: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 22 22:03:50.188: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 22 22:03:54.211: INFO: Setting MaxTries for pod polling to 
34 for networking test based on endpoint count 2 Apr 22 22:03:54.211: INFO: Breadth first check of 10.244.3.198 on host 10.10.190.207... Apr 22 22:03:54.213: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.125:9080/dial?request=hostname&protocol=udp&host=10.244.3.198&port=8081&tries=1'] Namespace:pod-network-test-8153 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:03:54.213: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:03:54.435: INFO: Waiting for responses: map[] Apr 22 22:03:54.435: INFO: reached 10.244.3.198 after 0/1 tries Apr 22 22:03:54.435: INFO: Breadth first check of 10.244.4.121 on host 10.10.190.208... Apr 22 22:03:54.440: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.125:9080/dial?request=hostname&protocol=udp&host=10.244.4.121&port=8081&tries=1'] Namespace:pod-network-test-8153 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:03:54.440: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:03:54.568: INFO: Waiting for responses: map[] Apr 22 22:03:54.568: INFO: reached 10.244.4.121 after 0/1 tries Apr 22 22:03:54.568: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:54.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8153" for this suite. • [SLOW TEST:26.452 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":569,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:49.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 22:03:49.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0dc1f82-c026-4944-9532-49fee8acda77" in namespace "downward-api-6901" to be "Succeeded or Failed" Apr 22 22:03:49.317: INFO: Pod "downwardapi-volume-d0dc1f82-c026-4944-9532-49fee8acda77": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.276164ms Apr 22 22:03:51.321: INFO: Pod "downwardapi-volume-d0dc1f82-c026-4944-9532-49fee8acda77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008490859s Apr 22 22:03:53.326: INFO: Pod "downwardapi-volume-d0dc1f82-c026-4944-9532-49fee8acda77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013267532s Apr 22 22:03:55.330: INFO: Pod "downwardapi-volume-d0dc1f82-c026-4944-9532-49fee8acda77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017441041s STEP: Saw pod success Apr 22 22:03:55.330: INFO: Pod "downwardapi-volume-d0dc1f82-c026-4944-9532-49fee8acda77" satisfied condition "Succeeded or Failed" Apr 22 22:03:55.332: INFO: Trying to get logs from node node2 pod downwardapi-volume-d0dc1f82-c026-4944-9532-49fee8acda77 container client-container: STEP: delete the pod Apr 22 22:03:55.346: INFO: Waiting for pod downwardapi-volume-d0dc1f82-c026-4944-9532-49fee8acda77 to disappear Apr 22 22:03:55.348: INFO: Pod downwardapi-volume-d0dc1f82-c026-4944-9532-49fee8acda77 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:03:55.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6901" for this suite. • [SLOW TEST:6.078 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":241,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:58:35.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-701 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-701 STEP: Creating statefulset with conflicting port in namespace statefulset-701 STEP: Waiting until pod test-pod will start running in namespace statefulset-701 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-701 Apr 22 22:03:45.161: FAIL: Pod ss-0 expected to be re-created at least once Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001688900) 
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001688900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc001688900, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Apr 22 22:03:45.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-701 describe po test-pod' Apr 22 22:03:45.342: INFO: stderr: "" Apr 22 22:03:45.342: INFO: stdout: "Name: test-pod\nNamespace: statefulset-701\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Fri, 22 Apr 2022 21:58:35 +0000\nLabels: \nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.34\"\n ],\n \"mac\": \"56:5b:22:34:7b:81\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.34\"\n ],\n \"mac\": \"56:5b:22:34:7b:81\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.4.34\nIPs:\n IP: 10.244.4.34\nContainers:\n webserver:\n Container ID: docker://2dbeb3dd8ed1b926ca4abf2301fb2f391dcc9cf68359762a445b58af6bd72db7\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Fri, 22 Apr 2022 21:58:39 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7mvlm (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-7mvlm:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m7s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m6s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 368.503078ms\n Normal Created 5m6s kubelet Created container webserver\n Normal Started 5m6s kubelet Started container webserver\n" Apr 22 22:03:45.342: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-701 Priority: 0 Node: node2/10.10.190.208 Start Time: Fri, 22 Apr 2022 21:58:35 +0000 Labels: Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.34" ], "mac": "56:5b:22:34:7b:81", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.34" ], "mac": "56:5b:22:34:7b:81", "default": true, "dns": {} }] kubernetes.io/psp: privileged Status: Running IP: 10.244.4.34 IPs: IP: 10.244.4.34 Containers: webserver: Container ID: 
docker://2dbeb3dd8ed1b926ca4abf2301fb2f391dcc9cf68359762a445b58af6bd72db7 Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Fri, 22 Apr 2022 21:58:39 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7mvlm (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-7mvlm: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulling 5m7s kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Normal Pulled 5m6s kubelet Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 368.503078ms Normal Created 5m6s kubelet Created container webserver Normal Started 5m6s kubelet Started container webserver Apr 22 22:03:45.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-701 logs test-pod --tail=100' Apr 22 22:03:45.511: INFO: stderr: "" Apr 22 22:03:45.511: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.34. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.34. Set the 'ServerName' directive globally to suppress this message\n[Fri Apr 22 21:58:39.988405 2022] [mpm_event:notice] [pid 1:tid 139917764885352] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Apr 22 21:58:39.988440 2022] [core:notice] [pid 1:tid 139917764885352] AH00094: Command line: 'httpd -D FOREGROUND'\n" Apr 22 22:03:45.511: INFO: Last 100 log lines of test-pod: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.34. Set the 'ServerName' directive globally to suppress this message AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.34. Set the 'ServerName' directive globally to suppress this message [Fri Apr 22 21:58:39.988405 2022] [mpm_event:notice] [pid 1:tid 139917764885352] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Fri Apr 22 21:58:39.988440 2022] [core:notice] [pid 1:tid 139917764885352] AH00094: Command line: 'httpd -D FOREGROUND' Apr 22 22:03:45.511: INFO: Deleting all statefulset in ns statefulset-701 Apr 22 22:03:45.514: INFO: Scaling statefulset ss to 0 Apr 22 22:03:45.527: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:03:55.535: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "statefulset-701". STEP: Found 7 events. 
Apr 22 22:03:55.548: INFO: At 2022-04-22 21:58:35 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100]] Apr 22 22:03:55.548: INFO: At 2022-04-22 21:58:35 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []] Apr 22 22:03:55.548: INFO: At 2022-04-22 21:58:35 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]] Apr 22 22:03:55.548: INFO: At 2022-04-22 21:58:38 +0000 UTC - event for test-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Apr 22 22:03:55.548: INFO: At 2022-04-22 21:58:39 +0000 UTC - event for test-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 368.503078ms Apr 22 22:03:55.548: INFO: At 2022-04-22 21:58:39 +0000 UTC - event for test-pod: {kubelet node2} Created: Created container webserver Apr 22 22:03:55.548: INFO: At 2022-04-22 21:58:39 +0000 UTC - event for test-pod: {kubelet node2} Started: Started container webserver Apr 22 22:03:55.551: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 22:03:55.551: INFO: test-pod node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 21:58:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 21:58:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 21:58:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 21:58:35 +0000 UTC }] Apr 22 22:03:55.551: INFO: Apr 22 22:03:55.555: INFO: Logging node info for node master1 Apr 22 22:03:55.558: INFO: Node Info: &Node{ObjectMeta:{master1 70710064-7222-41b1-b51e-81deaa6e7014 42515 0 2022-04-22 19:56:45 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock 
node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:56:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-22 20:04:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:49 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:49 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:49 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:03:49 +0000 UTC,LastTransitionTime:2022-04-22 19:59:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:025a90e4dec046189b065fcf68380be7,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:7e907077-ed98-4d46-8305-29673eaf3bf3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:03:55.560: INFO: Logging kubelet events for node master1 Apr 22 22:03:55.562: INFO: Logging pods the kubelet thinks is on node master1 Apr 22 22:03:55.587: INFO: kube-apiserver-master1 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.587: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:03:55.587: INFO: kube-controller-manager-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.587: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:03:55.587: INFO: kube-multus-ds-amd64-px448 started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.587: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:03:55.587: INFO: prometheus-operator-585ccfb458-zsrdh started at 2022-04-22 20:13:26 +0000 UTC (0+2 container statuses recorded) Apr 22 22:03:55.587: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:03:55.587: INFO: Container prometheus-operator ready: true, restart count 0 Apr 22 22:03:55.587: INFO: kube-scheduler-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.587: INFO: Container kube-scheduler ready: true, restart count 0 Apr 22 22:03:55.587: INFO: kube-proxy-hfgsd started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.587: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:03:55.587: INFO: kube-flannel-6vhmq started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:03:55.587: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:03:55.587: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:03:55.587: INFO: dns-autoscaler-7df78bfcfb-smkxp started at 2022-04-22 20:00:11 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.587: INFO: Container autoscaler ready: true, restart count 2 Apr 22 22:03:55.587: INFO: container-registry-65d7c44b96-7r6xc started at 2022-04-22 20:04:24 +0000 UTC (0+2 container statuses recorded) Apr 22 22:03:55.587: INFO: Container docker-registry ready: true, restart count 0 Apr 22 22:03:55.587: INFO: Container nginx ready: true, restart 
count 0 Apr 22 22:03:55.587: INFO: node-exporter-b7qpl started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:03:55.587: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:03:55.587: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:03:55.691: INFO: Latency metrics for node master1 Apr 22 22:03:55.691: INFO: Logging node info for node master2 Apr 22 22:03:55.695: INFO: Node Info: &Node{ObjectMeta:{master2 4a346a45-ed0b-49d9-a2ad-b419d2c4705c 42481 0 2022-04-22 19:57:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-04-22 20:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-04-22 20:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:47 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:47 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:47 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:03:47 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a68fd05f71b4f40ab5ab92028e707cc,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:45292226-7389-4aa9-8a98-33e443731d14,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:03:55.696: INFO: Logging kubelet events for node master2 Apr 22 22:03:55.698: INFO: Logging pods the kubelet thinks is on node master2 Apr 22 22:03:55.715: INFO: kube-scheduler-master2 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.715: INFO: Container kube-scheduler ready: true, restart count 1 Apr 22 22:03:55.715: INFO: kube-flannel-jlvdn started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:03:55.715: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:03:55.715: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:03:55.715: INFO: kube-multus-ds-amd64-7hw9v started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.715: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:03:55.715: INFO: coredns-8474476ff8-fhb42 started at 2022-04-22 20:00:09 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.715: INFO: Container coredns ready: true, restart count 1 Apr 22 22:03:55.715: INFO: kube-apiserver-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.715: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:03:55.715: INFO: kube-controller-manager-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.715: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:03:55.715: INFO: kube-proxy-df6vx started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.715: INFO: Container 
kube-proxy ready: true, restart count 2 Apr 22 22:03:55.715: INFO: node-feature-discovery-controller-cff799f9f-jfpb6 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.715: INFO: Container nfd-controller ready: true, restart count 0 Apr 22 22:03:55.715: INFO: node-exporter-4tbfp started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:03:55.715: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:03:55.715: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:03:55.806: INFO: Latency metrics for node master2 Apr 22 22:03:55.806: INFO: Logging node info for node master3 Apr 22 22:03:55.809: INFO: Node Info: &Node{ObjectMeta:{master3 43c25e47-7b5c-4cf0-863e-39d16b72dcb3 42477 0 2022-04-22 19:57:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-04-22 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-04-22 20:11:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:46 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:46 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:46 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:03:46 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e38c1766e8048fab7e120a1bdaf206c,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7266f836-7ba1-4d9b-9691-d8344ab173f1,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:03:55.810: INFO: Logging kubelet events for node master3 Apr 22 22:03:55.812: INFO: Logging pods the kubelet thinks is on node master3 Apr 22 22:03:55.832: INFO: coredns-8474476ff8-fdcj7 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.832: INFO: Container coredns ready: true, restart count 1 Apr 22 22:03:55.832: INFO: node-exporter-tnqsz started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:03:55.832: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:03:55.832: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:03:55.832: INFO: kube-proxy-z9q2t started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.832: INFO: Container kube-proxy ready: true, restart count 1 Apr 22 22:03:55.832: INFO: kube-flannel-6jkw9 started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:03:55.832: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:03:55.832: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 22:03:55.832: INFO: kube-multus-ds-amd64-tlrjm started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.832: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:03:55.832: INFO: kube-apiserver-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.832: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:03:55.832: INFO: kube-controller-manager-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded) 
Apr 22 22:03:55.832: INFO: Container kube-controller-manager ready: true, restart count 3 Apr 22 22:03:55.832: INFO: kube-scheduler-master3 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.832: INFO: Container kube-scheduler ready: true, restart count 2 Apr 22 22:03:55.914: INFO: Latency metrics for node master3 Apr 22 22:03:55.914: INFO: Logging node info for node node1 Apr 22 22:03:55.917: INFO: Node Info: &Node{ObjectMeta:{node1 e0ec3d42-4e2e-47e3-b369-98011b25b39b 42523 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:11:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:29 +0000 UTC,LastTransitionTime:2022-04-22 20:02:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:50 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:50 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:50 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:03:50 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cb8bd90647b418e9defe4fbcf1e6b5b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:bd49e3f7-3bce-4d4e-8596-432fc9a7c1c3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:03:55.917: INFO: Logging kubelet events for node node1 Apr 22 22:03:55.919: INFO: Logging pods the kubelet thinks is on node node1 Apr 22 22:03:55.936: INFO: kube-flannel-l4rjs started at 2022-04-22 19:59:33 +0000 UTC (1+1 container 
statuses recorded)
Apr 22 22:03:55.936: INFO: Init container install-cni ready: true, restart count 2
Apr 22 22:03:55.936: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 22:03:55.936: INFO: kube-multus-ds-amd64-x8jqs started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container kube-multus ready: true, restart count 1
Apr 22 22:03:55.936: INFO: forbid-27511080-d76jg started at 2022-04-22 22:00:00 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container c ready: true, restart count 0
Apr 22 22:03:55.936: INFO: ss2-0 started at 2022-04-22 22:03:27 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container webserver ready: true, restart count 0
Apr 22 22:03:55.936: INFO: affinity-nodeport-transition-rvq2p started at 2022-04-22 22:02:20 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Apr 22 22:03:55.936: INFO: cmk-2vd7z started at 2022-04-22 20:12:29 +0000 UTC (0+2 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container nodereport ready: true, restart count 0
Apr 22 22:03:55.936: INFO: Container reconcile ready: true, restart count 0
Apr 22 22:03:55.936: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g started at 2022-04-22 20:16:40 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container tas-extender ready: true, restart count 0
Apr 22 22:03:55.936: INFO: collectd-g2c8k started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container collectd ready: true, restart count 0
Apr 22 22:03:55.936: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 22:03:55.936: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 22:03:55.936: INFO: nginx-proxy-node1 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 22:03:55.936: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 22:03:55.936: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 22:03:55.936: INFO: cmk-init-discover-node1-7s78z started at 2022-04-22 20:11:46 +0000 UTC (0+3 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container discover ready: false, restart count 0
Apr 22 22:03:55.936: INFO: Container init ready: false, restart count 0
Apr 22 22:03:55.936: INFO: Container install ready: false, restart count 0
Apr 22 22:03:55.936: INFO: prometheus-k8s-0 started at 2022-04-22 20:13:52 +0000 UTC (0+4 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container config-reloader ready: true, restart count 0
Apr 22 22:03:55.936: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 22:03:55.936: INFO: Container grafana ready: true, restart count 0
Apr 22 22:03:55.936: INFO: Container prometheus ready: true, restart count 1
Apr 22 22:03:55.936: INFO: affinity-nodeport-transition-xzjkx started at 2022-04-22 22:02:20 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:55.936: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Apr 22
22:03:55.936: INFO: affinity-nodeport-7ns5q started at 2022-04-22 22:03:54 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.936: INFO: Container affinity-nodeport ready: false, restart count 0 Apr 22 22:03:55.936: INFO: kube-proxy-v8fdh started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.936: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:03:55.936: INFO: node-feature-discovery-worker-2hkr5 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.936: INFO: Container nfd-worker ready: true, restart count 0 Apr 22 22:03:55.936: INFO: node-exporter-9zzfv started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:03:55.936: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:03:55.936: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:03:55.936: INFO: netserver-0 started at 2022-04-22 22:03:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.936: INFO: Container webserver ready: true, restart count 0 Apr 22 22:03:55.936: INFO: ss2-1 started at 2022-04-22 22:03:17 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:55.936: INFO: Container webserver ready: false, restart count 0 Apr 22 22:03:56.434: INFO: Latency metrics for node node1 Apr 22 22:03:56.434: INFO: Logging node info for node node2 Apr 22 22:03:56.437: INFO: Node Info: &Node{ObjectMeta:{node2 ef89f5d1-0c69-4be8-a041-8437402ef215 42478 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true 
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:12:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:12:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:30 +0000 UTC,LastTransitionTime:2022-04-22 20:02:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:46 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:46 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:03:46 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:03:46 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e6f6d1644f942b881dbf2d9722ff85b,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:cc218e06-beff-411d-b91e-f4a272d9c83f,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:03:56.439: INFO: Logging kubelet events for node node2 Apr 22 22:03:56.441: INFO: Logging pods the kubelet thinks is on node node2 Apr 22 22:03:56.462: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 22 22:03:56.462: INFO: affinity-nodeport-transition-bd6hl started at 2022-04-22 22:02:20 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Apr 22 22:03:56.462: INFO: kube-multus-ds-amd64-kjrqq started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:03:56.462: INFO: collectd-ptpbz started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded) Apr 22 22:03:56.462: INFO: Container collectd ready: true, restart count 0 Apr 22 
22:03:56.462: INFO: Container collectd-exporter ready: true, restart count 0 Apr 22 22:03:56.462: INFO: Container rbac-proxy ready: true, restart count 0 Apr 22 22:03:56.462: INFO: test-container-pod started at 2022-04-22 22:03:50 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container webserver ready: true, restart count 0 Apr 22 22:03:56.462: INFO: pod-37637eab-f26c-4fc6-b093-94cf686484dc started at 2022-04-22 22:03:52 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container test-container ready: false, restart count 0 Apr 22 22:03:56.462: INFO: affinity-nodeport-ks7k5 started at 2022-04-22 22:03:54 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container affinity-nodeport ready: false, restart count 0 Apr 22 22:03:56.462: INFO: kube-flannel-2kskh started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:03:56.462: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 22:03:56.462: INFO: cmk-init-discover-node2-2m4dr started at 2022-04-22 20:12:06 +0000 UTC (0+3 container statuses recorded) Apr 22 22:03:56.462: INFO: Container discover ready: false, restart count 0 Apr 22 22:03:56.462: INFO: Container init ready: false, restart count 0 Apr 22 22:03:56.462: INFO: Container install ready: false, restart count 0 Apr 22 22:03:56.462: INFO: node-exporter-c4bhs started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:03:56.462: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:03:56.462: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:03:56.462: INFO: execpod-affinityzlqjf started at 2022-04-22 22:02:26 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container agnhost-container ready: true, restart count 0 Apr 22 22:03:56.462: INFO: kube-proxy-jvkvz started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:03:56.462: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 22 22:03:56.462: INFO: dns-test-33205816-c58c-4d9b-9698-3b64918e82cb started at 2022-04-22 22:03:55 +0000 UTC (0+3 container statuses recorded) Apr 22 22:03:56.462: INFO: Container jessie-querier ready: false, restart count 0 Apr 22 22:03:56.462: INFO: Container querier ready: false, restart count 0 Apr 22 22:03:56.462: INFO: Container webserver ready: false, restart count 0 Apr 22 22:03:56.462: INFO: busybox-26df426c-8183-43f6-aa25-d63576f35e7f started at 2022-04-22 22:02:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container busybox ready: true, restart count 0 Apr 22 22:03:56.462: INFO: test-pod started at 2022-04-22 21:58:35 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container webserver ready: true, restart count 0 Apr 22 22:03:56.462: INFO: node-feature-discovery-worker-bktph started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:03:56.462: INFO: Container nfd-worker ready: true, restart count 0 Apr 22 22:03:56.462: INFO: cmk-vdkxb started at 2022-04-22 20:12:30 +0000 UTC (0+2 container statuses recorded) Apr 22 22:03:56.462: INFO: Container nodereport ready: true, restart count 0 Apr 22 22:03:56.462: 
INFO: Container reconcile ready: true, restart count 0
Apr 22 22:03:56.462: INFO: cmk-webhook-6c9d5f8578-nmxns started at 2022-04-22 20:12:30 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:56.462: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 22:03:56.462: INFO: affinity-nodeport-9r2t4 started at 2022-04-22 22:03:54 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:56.462: INFO: Container affinity-nodeport ready: false, restart count 0
Apr 22 22:03:56.462: INFO: nginx-proxy-node2 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:56.462: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 22:03:56.462: INFO: liveness-5bc5184b-ff3a-4773-9ef8-469075f1f563 started at 2022-04-22 22:01:30 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:56.463: INFO: Container agnhost-container ready: false, restart count 4
Apr 22 22:03:56.463: INFO: netserver-1 started at 2022-04-22 22:03:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:03:56.463: INFO: Container webserver ready: true, restart count 0
Apr 22 22:03:57.482: INFO: Latency metrics for node node2
Apr 22 22:03:57.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-701" for this suite.

• Failure [322.390 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Should recreate evicted statefulset [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Apr 22 22:03:45.161: Pod ss-0 expected to be re-created at least once

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":8,"skipped":197,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:03:52.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 22 22:03:52.199: INFO: Waiting up to 5m0s for pod "pod-37637eab-f26c-4fc6-b093-94cf686484dc" in namespace "emptydir-2512" to be "Succeeded or Failed"
Apr 22 22:03:52.202: INFO: Pod "pod-37637eab-f26c-4fc6-b093-94cf686484dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.661946ms
Apr 22 22:03:54.207: INFO: Pod "pod-37637eab-f26c-4fc6-b093-94cf686484dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007652915s
Apr 22 22:03:56.211: INFO: Pod "pod-37637eab-f26c-4fc6-b093-94cf686484dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011653791s
Apr 22 22:03:58.216: INFO: Pod "pod-37637eab-f26c-4fc6-b093-94cf686484dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017217406s
STEP: Saw pod success
Apr 22 22:03:58.216: INFO: Pod "pod-37637eab-f26c-4fc6-b093-94cf686484dc" satisfied condition "Succeeded or Failed"
Apr 22 22:03:58.219: INFO: Trying to get logs from node node2 pod pod-37637eab-f26c-4fc6-b093-94cf686484dc container test-container:
STEP: delete the pod
Apr 22 22:03:58.416: INFO: Waiting for pod pod-37637eab-f26c-4fc6-b093-94cf686484dc to disappear
Apr 22 22:03:58.418: INFO: Pod pod-37637eab-f26c-4fc6-b093-94cf686484dc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:03:58.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2512" for this suite.

• [SLOW TEST:6.260 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
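The pod this EmptyDir test builds can be sketched with the client-go types. This is a minimal illustration of a tmpfs-backed emptyDir exercised by a non-root container, not the framework's exact helper; the UID and the agnhost mounttest invocation are assumptions for the sketch, not values taken from the log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sketchEmptyDirPod builds a pod in the spirit of the test above: a
// tmpfs-backed emptyDir mounted into a single short-lived container that
// runs as a non-root user and checks 0777 file creation on the mount.
func sketchEmptyDirPod() *corev1.Pod {
	nonRoot := int64(1000) // illustrative UID; the real test derives its own
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumMemory is what makes the emptyDir tmpfs-backed.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				// mounttest args shown for illustration; treat the exact
				// flags as an assumption, not the framework's literal call.
				Args:         []string{"mounttest", "--new_file_0777=/test-volume/test-file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}

func main() { fmt.Println(sketchEmptyDirPod().Spec.Volumes[0].VolumeSource.EmptyDir.Medium) }
```

The "Succeeded or Failed" polling above is the framework waiting for this pod to run to completion, then reading the test container's log to verify the created file's mode.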
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:03:57.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-bbd040ba-9d08-4c1c-9bbb-da131b65511a
STEP: Creating a pod to test consume configMaps
Apr 22 22:03:57.580: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec570b26-2199-4779-938e-adf5632b2d75" in namespace "configmap-4335" to be "Succeeded or Failed"
Apr 22 22:03:57.590: INFO: Pod "pod-configmaps-ec570b26-2199-4779-938e-adf5632b2d75": Phase="Pending", Reason="", readiness=false. Elapsed: 9.504638ms
Apr 22 22:03:59.593: INFO: Pod "pod-configmaps-ec570b26-2199-4779-938e-adf5632b2d75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012971237s
Apr 22 22:04:01.598: INFO: Pod "pod-configmaps-ec570b26-2199-4779-938e-adf5632b2d75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01794085s
Apr 22 22:04:03.603: INFO: Pod "pod-configmaps-ec570b26-2199-4779-938e-adf5632b2d75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022758346s
Apr 22 22:04:05.608: INFO: Pod "pod-configmaps-ec570b26-2199-4779-938e-adf5632b2d75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.027690775s
STEP: Saw pod success
Apr 22 22:04:05.608: INFO: Pod "pod-configmaps-ec570b26-2199-4779-938e-adf5632b2d75" satisfied condition "Succeeded or Failed"
Apr 22 22:04:05.611: INFO: Trying to get logs from node node2 pod pod-configmaps-ec570b26-2199-4779-938e-adf5632b2d75 container agnhost-container:
STEP: delete the pod
Apr 22 22:04:05.623: INFO: Waiting for pod pod-configmaps-ec570b26-2199-4779-938e-adf5632b2d75 to disappear
Apr 22 22:04:05.626: INFO: Pod pod-configmaps-ec570b26-2199-4779-938e-adf5632b2d75 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:04:05.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4335" for this suite.

• [SLOW TEST:8.090 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":219,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
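The "with mappings" variant above differs from plain ConfigMap consumption in that an Items list remaps a key to a chosen path inside the volume, instead of projecting every key under its own name. A minimal sketch with the client-go types, assuming a key named data-1 and a target path path/to/data-2 (names chosen for illustration, not read from the log):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sketchConfigMapPod mounts a ConfigMap volume and remaps one key: the
// value stored under "data-1" appears in the pod at
// /etc/configmap-volume/path/to/data-2 rather than /etc/configmap-volume/data-1.
func sketchConfigMapPod(cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						// The mapping: key -> path relative to the mount point.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				// Illustrative mounttest call that prints the mapped file back.
				Args:         []string{"mounttest", "--file_content=/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
}

func main() { fmt.Println(sketchConfigMapPod("configmap-test-volume-map").Spec.Volumes[0].Name) }
```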
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:02:19.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-1536
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
Apr 22 22:02:19.397: INFO: Found 0 stateful pods, waiting for 3
Apr 22 22:02:29.403: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 22 22:02:29.403: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 22 22:02:29.403: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Apr 22 22:02:39.402: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 22 22:02:39.403: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 22 22:02:39.403: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
Apr 22 22:02:39.428: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 22 22:02:49.459: INFO: Updating stateful set ss2
Apr 22 22:02:49.465: INFO: Waiting for Pod statefulset-1536/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
STEP: Restoring Pods to the correct revision when they are deleted
Apr 22 22:02:59.488: INFO: Found 1 stateful pods, waiting for 3
Apr 22 22:03:09.492: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 22 22:03:09.492: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 22 22:03:09.492: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 22 22:03:09.515: INFO: Updating stateful set ss2
Apr 22 22:03:09.520: INFO: Waiting for Pod statefulset-1536/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Apr 22 22:03:19.545: INFO: Updating stateful set ss2
Apr 22 22:03:19.551: INFO: Waiting for StatefulSet statefulset-1536/ss2 to complete update
Apr 22 22:03:19.551: INFO: Waiting for Pod statefulset-1536/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Apr 22 22:03:29.558: INFO: Waiting for StatefulSet statefulset-1536/ss2 to complete update
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Apr 22 22:03:39.559: INFO: Deleting all statefulset in ns statefulset-1536
Apr 22 22:03:39.562: INFO: Scaling statefulset ss2 to 0
Apr 22 22:04:09.578: INFO: Waiting for statefulset status.replicas updated to 0
Apr 22 22:04:09.581: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:04:09.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1536" for this suite.

• [SLOW TEST:110.233 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":6,"skipped":135,"failed":0}
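The canary and phased steps in the test above are driven by one field: the StatefulSet RollingUpdate partition. Only pods with ordinal >= partition are moved to the update revision, so holding the partition at 2 updates just ss2-2 (the canary), and stepping it down then rolls ss2-1 and ss2-0. A minimal sketch of the field being toggled, using the apps/v1 types:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// setPartition configures a RollingUpdate strategy so that only pods with
// ordinal >= partition are recreated at the StatefulSet's update revision;
// lower-ordinal pods stay at the current revision until the partition drops.
func setPartition(ss *appsv1.StatefulSet, partition int32) {
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
}

func main() {
	ss := &appsv1.StatefulSet{}
	setPartition(ss, 2) // canary: with 3 replicas, only ordinal 2 (ss2-2) updates
	fmt.Println(*ss.Spec.UpdateStrategy.RollingUpdate.Partition)
}
```

A partition greater than the replica count updates nothing, which is the "Not applying an update when the partition is greater than the number of replicas" step; the phased rolling update is the same mechanism with the partition lowered one step at a time.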
SSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":268,"failed":0}
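The Deployment test that follows exercises proportional scaling: with the rollout wedged on a bad image (old ReplicaSet at .spec.replicas = 8, new at 5, maxSurge = 3), scaling the Deployment from 10 to 30 allows 30 + 3 = 33 pods in total, and the controller distributes them in proportion to each ReplicaSet's current size. That is where the 20 and 13 verified below come from. A simplified sketch of the arithmetic, not the actual kube-controller-manager code (which also handles rounding leftovers across many ReplicaSets):

```go
package main

import "fmt"

// proportionalSplit reproduces the replica counts the next test verifies.
// The allowed total (target replicas plus maxSurge) is shared between the
// two ReplicaSets in proportion to their current sizes, with the remainder
// after integer division going to the other ReplicaSet.
func proportionalSplit(oldRS, newRS, target, maxSurge int32) (int32, int32) {
	allowed := target + maxSurge        // 30 + 3 = 33 pods may exist at once
	current := oldRS + newRS            // 8 + 5 = 13 pods are spec'd today
	oldShare := allowed * oldRS / current // 33*8/13 = 20 (integer division)
	newShare := allowed - oldShare        // 33 - 20 = 13
	return oldShare, newShare
}

func main() {
	o, n := proportionalSplit(8, 5, 30, 3)
	fmt.Println(o, n) // 20 13, matching the .spec.replicas values logged below
}
```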
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:03:58.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 22 22:03:58.451: INFO: Creating deployment "webserver-deployment"
Apr 22 22:03:58.454: INFO: Waiting for observed generation 1
Apr 22 22:04:00.460: INFO: Waiting for all required pods to come up
Apr 22 22:04:00.464: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Apr 22 22:04:08.472: INFO: Waiting for deployment "webserver-deployment" to complete
Apr 22 22:04:08.478: INFO: Updating deployment "webserver-deployment" with a non-existent image
Apr 22 22:04:08.485: INFO: Updating deployment webserver-deployment
Apr 22 22:04:08.485: INFO: Waiting for observed generation 2
Apr 22 22:04:10.491: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Apr 22 22:04:10.493: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Apr 22 22:04:10.496: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Apr 22 22:04:10.505: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Apr 22 22:04:10.505: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Apr 22 22:04:10.508: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Apr 22 22:04:10.513: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Apr 22 22:04:10.513: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Apr 22 22:04:10.521: INFO: Updating deployment webserver-deployment
Apr 22 22:04:10.521: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Apr 22 22:04:10.525: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Apr 22 22:04:10.528: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80
Apr 22 22:04:10.534: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6196 8a059375-0e42-4064-b3c8-f92d3488ae71 43115 3 2022-04-22 22:03:58 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-22 22:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-22 22:04:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0007b78b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2022-04-22 22:04:08 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-04-22 22:04:10 +0000 UTC,LastTransitionTime:2022-04-22 22:04:10 +0000
UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 22 22:04:10.537: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6196 133433f6-1cb3-40b1-b10d-8c6656bea5e0 43113 3 2022-04-22 22:04:08 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 8a059375-0e42-4064-b3c8-f92d3488ae71 0xc0007b7ca7 0xc0007b7ca8}] [] [{kube-controller-manager Update apps/v1 2022-04-22 22:04:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a059375-0e42-4064-b3c8-f92d3488ae71\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0007b7d28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:04:10.537: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 22 22:04:10.537: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-6196 f3fdf02a-4711-41b0-821a-713b2b484c93 43111 3 2022-04-22 22:03:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 8a059375-0e42-4064-b3c8-f92d3488ae71 0xc0007b7d87 0xc0007b7d88}] [] [{kube-controller-manager Update apps/v1 2022-04-22 22:04:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a059375-0e42-4064-b3c8-f92d3488ae71\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0007b7df8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:04:10.545: INFO: Pod "webserver-deployment-795d758f88-9vjbh" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9vjbh webserver-deployment-795d758f88- deployment-6196 620c46cb-bc9c-49b8-8e54-8f097491cdcb 43074 0 2022-04-22 22:04:08 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 133433f6-1cb3-40b1-b10d-8c6656bea5e0 0xc001fe028f 0xc001fe02a0}] [] [{kube-controller-manager Update v1 2022-04-22 22:04:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"133433f6-1cb3-40b1-b10d-8c6656bea5e0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-04-22 22:04:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-trg48,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trg48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-04-22 22:04:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.546: INFO: Pod "webserver-deployment-795d758f88-c9r2l" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-c9r2l webserver-deployment-795d758f88- deployment-6196 898ba4cb-fb9a-4011-8bec-224b780df459 43090 0 2022-04-22 22:04:08 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 133433f6-1cb3-40b1-b10d-8c6656bea5e0 0xc001fe046f 0xc001fe0480}] [] [{kube-controller-manager Update v1 2022-04-22 22:04:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"133433f6-1cb3-40b1-b10d-8c6656bea5e0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-04-22 22:04:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ztqk8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztqk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-04-22 22:04:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.546: INFO: Pod "webserver-deployment-795d758f88-hxpzq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-hxpzq webserver-deployment-795d758f88- deployment-6196 f488d037-38a4-429e-92b3-671e09aa253f 43088 0 2022-04-22 22:04:08 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 133433f6-1cb3-40b1-b10d-8c6656bea5e0 0xc001fe067f 0xc001fe06a0}] [] [{kube-controller-manager Update v1 2022-04-22 22:04:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"133433f6-1cb3-40b1-b10d-8c6656bea5e0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-04-22 22:04:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gfxgv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gfxgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-04-22 22:04:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.546: INFO: Pod "webserver-deployment-795d758f88-k8xsg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-k8xsg webserver-deployment-795d758f88- deployment-6196 5503f5e5-b982-4464-a7b1-71c2121fe7ba 43091 0 2022-04-22 22:04:08 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 133433f6-1cb3-40b1-b10d-8c6656bea5e0 0xc001fe086f 0xc001fe0880}] [] [{kube-controller-manager Update v1 2022-04-22 22:04:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"133433f6-1cb3-40b1-b10d-8c6656bea5e0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-04-22 22:04:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8jn5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8jn5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-04-22 22:04:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.547: INFO: Pod "webserver-deployment-795d758f88-kq8v2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kq8v2 webserver-deployment-795d758f88- deployment-6196 dfc100f0-1218-4ea1-a347-c4a6b07ebc85 43121 0 2022-04-22 22:04:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 133433f6-1cb3-40b1-b10d-8c6656bea5e0 0xc001fe0a5f 0xc001fe0a70}] [] [{kube-controller-manager Update v1 2022-04-22 22:04:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"133433f6-1cb3-40b1-b10d-8c6656bea5e0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g9tmd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g9tmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.547: INFO: Pod "webserver-deployment-795d758f88-nbthw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-nbthw webserver-deployment-795d758f88- deployment-6196 3d8b76f2-9864-4c21-98ca-39a77045ff0c 43069 0 2022-04-22 22:04:08 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 133433f6-1cb3-40b1-b10d-8c6656bea5e0 0xc001fe0bdf 0xc001fe0bf0}] [] [{kube-controller-manager Update v1 2022-04-22 22:04:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"133433f6-1cb3-40b1-b10d-8c6656bea5e0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-04-22 22:04:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g7hxd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g7hxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-04-22 22:04:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.547: INFO: Pod "webserver-deployment-847dcfb7fb-6w268" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-6w268 webserver-deployment-847dcfb7fb- deployment-6196 a199881f-a11c-4500-ab27-3f96e33c5596 42996 0 2022-04-22 22:03:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.131" ], "mac": "86:ee:b6:35:c0:24", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.131" ], "mac": "86:ee:b6:35:c0:24", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f3fdf02a-4711-41b0-821a-713b2b484c93 0xc001fe0dcf 0xc001fe0de0}] [] [{kube-controller-manager Update v1 2022-04-22 22:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3fdf02a-4711-41b0-821a-713b2b484c93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:04:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:04:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.131\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c6dgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c6dgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.131,StartTime:2022-04-22 22:03:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:04:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://7930bc45551c94b000da5b80a77c3698aa80b5963328622d18fb22492eca8e47,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.548: INFO: Pod "webserver-deployment-847dcfb7fb-8qqgk" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8qqgk webserver-deployment-847dcfb7fb- deployment-6196 99d6fb82-39dc-4118-bc23-166f52910e28 42984 0 2022-04-22 22:03:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.206" ], "mac": "ea:80:45:80:25:31", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.206" ], "mac": "ea:80:45:80:25:31", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f3fdf02a-4711-41b0-821a-713b2b484c93 0xc001fe0fcf 0xc001fe0ff0}] [] [{kube-controller-manager Update v1 2022-04-22 22:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3fdf02a-4711-41b0-821a-713b2b484c93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:04:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:04:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.206\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-624kt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-624kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.206,StartTime:2022-04-22 22:03:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:04:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://ce99c46116573ae853e41c05adbff351ddf0417a5483095f7ca0cdc51bcf8835,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.548: INFO: Pod "webserver-deployment-847dcfb7fb-bsj4v" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-bsj4v webserver-deployment-847dcfb7fb- deployment-6196 63b4845e-526f-40ee-b569-ec729becd797 42979 0 2022-04-22 22:03:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.204" ], "mac": "e6:1a:42:13:4b:17", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.204" ], "mac": "e6:1a:42:13:4b:17", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f3fdf02a-4711-41b0-821a-713b2b484c93 0xc001fe11df 0xc001fe11f0}] [] [{kube-controller-manager Update v1 2022-04-22 22:03:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3fdf02a-4711-41b0-821a-713b2b484c93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:04:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:04:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.204\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dwgqx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwgqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.204,StartTime:2022-04-22 22:03:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:04:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://d1577dfc9ba9df4240a8f13eac2fdcd55aab92a73793a2ef9ca4757a85781772,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.204,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.549: INFO: Pod "webserver-deployment-847dcfb7fb-fmqxh" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-fmqxh webserver-deployment-847dcfb7fb- deployment-6196 3e2a702b-ff5f-49a6-a3e2-e5cbfb752d6f 43024 0 2022-04-22 22:03:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.133" ], "mac": "e6:0e:41:a7:d0:2a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.133" ], "mac": "e6:0e:41:a7:d0:2a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
f3fdf02a-4711-41b0-821a-713b2b484c93 0xc001fe13df 0xc001fe13f0}] [] [{kube-controller-manager Update v1 2022-04-22 22:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3fdf02a-4711-41b0-821a-713b2b484c93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:04:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:04:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.133\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gz9j6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gz9j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpti
ons:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.133,StartTime:2022-04-22 22:03:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:04:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://3ad31090656453f5b34cafaaa47b1258c199a24219776f72a1851b4259d3cebe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.549: INFO: Pod "webserver-deployment-847dcfb7fb-pbwgl" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-pbwgl webserver-deployment-847dcfb7fb- deployment-6196 208f2865-7616-4bee-963f-596264e4ff7c 43119 0 2022-04-22 22:04:10 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f3fdf02a-4711-41b0-821a-713b2b484c93 0xc001fe15df 0xc001fe15f0}] [] [{kube-controller-manager Update v1 2022-04-22 22:04:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3fdf02a-4711-41b0-821a-713b2b484c93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9qdnv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qdnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.549: INFO: Pod "webserver-deployment-847dcfb7fb-spf8h" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-spf8h webserver-deployment-847dcfb7fb- deployment-6196 9ee5b56d-767c-44d3-93f4-587fa30102ab 42937 0 2022-04-22 22:03:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.202" ], "mac": "c2:c0:ef:db:0a:62", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.202" ], "mac": "c2:c0:ef:db:0a:62", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f3fdf02a-4711-41b0-821a-713b2b484c93 0xc001fe174f 0xc001fe1760}] [] [{kube-controller-manager Update v1 2022-04-22 22:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3fdf02a-4711-41b0-821a-713b2b484c93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:04:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:04:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.202\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qhc86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhc86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.202,StartTime:2022-04-22 22:03:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:04:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e43dedb7a32cf6767c5ba73efc7981050f78773a2638e182dd01119b7f1c623e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.202,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.550: INFO: Pod "webserver-deployment-847dcfb7fb-t27rs" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-t27rs webserver-deployment-847dcfb7fb- deployment-6196 e1148ef7-321b-4c46-a385-4742b1c949ff 43017 0 2022-04-22 22:03:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.134" ], "mac": "9a:0f:9d:d0:3a:61", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.134" ], "mac": "9a:0f:9d:d0:3a:61", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f3fdf02a-4711-41b0-821a-713b2b484c93 0xc001fe197f 0xc001fe1990}] [] [{kube-controller-manager Update v1 2022-04-22 22:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3fdf02a-4711-41b0-821a-713b2b484c93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:04:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:04:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.134\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-65ndn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-65ndn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.134,StartTime:2022-04-22 22:03:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:04:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://51f56b86ea330ea26eb75161d23524a0be28a9b3e7b1796dd294e66e09fb3ad8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.550: INFO: Pod "webserver-deployment-847dcfb7fb-txqp9" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-txqp9 webserver-deployment-847dcfb7fb- deployment-6196 df2a2810-ac55-4abe-8ca1-697640449f1b 42993 0 2022-04-22 22:03:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.132" ], "mac": "ee:18:fc:c4:c4:96", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.132" ], "mac": "ee:18:fc:c4:c4:96", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f3fdf02a-4711-41b0-821a-713b2b484c93 0xc001fe1b7f 0xc001fe1b90}] [] [{kube-controller-manager Update v1 2022-04-22 22:03:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3fdf02a-4711-41b0-821a-713b2b484c93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:04:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:04:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.132\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2ffwf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2ffwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.132,StartTime:2022-04-22 22:03:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:04:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://b59d15135111bb5c94a9c1cd7233da96ad674ef54e6e03f497487d6e0555f7f0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.132,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 22:04:10.550: INFO: Pod "webserver-deployment-847dcfb7fb-wh88m" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-wh88m webserver-deployment-847dcfb7fb- deployment-6196 315c6179-12fb-4a77-8e50-b49a395d7089 42947 0 2022-04-22 22:03:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.203" ], "mac": "ee:96:eb:d8:dd:b5", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.203" ], "mac": "ee:96:eb:d8:dd:b5", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
f3fdf02a-4711-41b0-821a-713b2b484c93 0xc001fe1d7f 0xc001fe1d90}] [] [{kube-controller-manager Update v1 2022-04-22 22:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3fdf02a-4711-41b0-821a-713b2b484c93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:04:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:04:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.203\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g8f8g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g8f8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpti
ons:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.203,StartTime:2022-04-22 22:03:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:04:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://cc02935e5741ce6ed56ae2dcce30e39c51622c04a701fc74d4811bbd4a59312b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.203,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:10.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6196" for this suite. 
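The proportional-scaling test above resizes the webserver Deployment while a rollout is still in flight and checks that the controller splits the new replica count across the old and new ReplicaSets in proportion to their current sizes. Below is a minimal client-go sketch of the same interaction, not the e2e framework's actual code; it reuses the namespace and object names from this run, and the target of 30 replicas is arbitrary.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Resize the Deployment through the scale subresource while a
	// rolling update is still in progress.
	scale, err := cs.AppsV1().Deployments("deployment-6196").
		GetScale(ctx, "webserver-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 30 // arbitrary new size, chosen for the sketch
	if _, err := cs.AppsV1().Deployments("deployment-6196").
		UpdateScale(ctx, "webserver-deployment", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// The deployment controller distributes the extra replicas across the
	// existing ReplicaSets in proportion to their current sizes; listing
	// them shows the split.
	rsList, err := cs.AppsV1().ReplicaSets("deployment-6196").
		List(ctx, metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, rs := range rsList.Items {
		fmt.Printf("%s: %d replicas\n", rs.Name, rs.Status.Replicas)
	}
}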
• [SLOW TEST:12.133 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":12,"skipped":268,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:05.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-92f7e3af-226d-4239-9436-d7fd73464cca STEP: Creating a pod to test consume secrets Apr 22 22:04:05.748: INFO: Waiting up to 5m0s for pod "pod-secrets-57c5ff41-a2c8-4d9a-894c-786ee101a552" in namespace "secrets-6822" to be "Succeeded or Failed" Apr 22 22:04:05.750: INFO: Pod "pod-secrets-57c5ff41-a2c8-4d9a-894c-786ee101a552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477666ms Apr 22 22:04:07.754: INFO: Pod "pod-secrets-57c5ff41-a2c8-4d9a-894c-786ee101a552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006014257s Apr 22 22:04:09.757: INFO: Pod "pod-secrets-57c5ff41-a2c8-4d9a-894c-786ee101a552": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009459652s Apr 22 22:04:11.761: INFO: Pod "pod-secrets-57c5ff41-a2c8-4d9a-894c-786ee101a552": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0135182s STEP: Saw pod success Apr 22 22:04:11.761: INFO: Pod "pod-secrets-57c5ff41-a2c8-4d9a-894c-786ee101a552" satisfied condition "Succeeded or Failed" Apr 22 22:04:11.766: INFO: Trying to get logs from node node1 pod pod-secrets-57c5ff41-a2c8-4d9a-894c-786ee101a552 container secret-env-test: STEP: delete the pod Apr 22 22:04:11.777: INFO: Waiting for pod pod-secrets-57c5ff41-a2c8-4d9a-894c-786ee101a552 to disappear Apr 22 22:04:11.779: INFO: Pod pod-secrets-57c5ff41-a2c8-4d9a-894c-786ee101a552 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:11.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6822" for this suite. 
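For reference, the behavior exercised by the Secrets test just above can be reproduced with a short client-go sketch: create a Secret, then a one-shot pod whose container maps one key into an environment variable and prints its environment. The object names, namespace, and busybox image here are illustrative, not taken from the test run.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// The Secret whose key the pod will consume (illustrative names).
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets("default").Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A one-shot pod that maps the key into SECRET_DATA and dumps its
	// environment, mirroring what the e2e consumption check asserts on.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox", // stand-in for the e2e test image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}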
• [SLOW TEST:6.077 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":255,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:55.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7413.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7413.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7413.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7413.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7413.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7413.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7413.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7413.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7413.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7413.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7413.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 190.49.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.49.190_udp@PTR;check="$$(dig +tcp +noall +answer +search 190.49.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.49.190_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7413.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7413.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7413.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7413.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7413.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7413.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7413.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7413.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7413.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7413.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7413.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 190.49.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.49.190_udp@PTR;check="$$(dig +tcp +noall +answer +search 190.49.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.49.190_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 22:04:07.423: INFO: Unable to read wheezy_udp@dns-test-service.dns-7413.svc.cluster.local from pod dns-7413/dns-test-33205816-c58c-4d9b-9698-3b64918e82cb: the server could not find the requested resource (get pods dns-test-33205816-c58c-4d9b-9698-3b64918e82cb) Apr 22 22:04:07.426: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7413.svc.cluster.local from pod dns-7413/dns-test-33205816-c58c-4d9b-9698-3b64918e82cb: the server could not find the requested resource (get pods dns-test-33205816-c58c-4d9b-9698-3b64918e82cb) Apr 22 22:04:07.430: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local from pod dns-7413/dns-test-33205816-c58c-4d9b-9698-3b64918e82cb: the server could not find the requested resource (get pods dns-test-33205816-c58c-4d9b-9698-3b64918e82cb) Apr 22 22:04:07.432: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local from pod dns-7413/dns-test-33205816-c58c-4d9b-9698-3b64918e82cb: the server could not find the requested resource (get pods dns-test-33205816-c58c-4d9b-9698-3b64918e82cb) Apr 22 22:04:07.450: INFO: Unable to read jessie_udp@dns-test-service.dns-7413.svc.cluster.local from pod dns-7413/dns-test-33205816-c58c-4d9b-9698-3b64918e82cb: the server could not find the requested resource (get pods dns-test-33205816-c58c-4d9b-9698-3b64918e82cb) Apr 22 22:04:07.453: INFO: Unable to read jessie_tcp@dns-test-service.dns-7413.svc.cluster.local from pod dns-7413/dns-test-33205816-c58c-4d9b-9698-3b64918e82cb: the server could not find the requested resource (get pods dns-test-33205816-c58c-4d9b-9698-3b64918e82cb) Apr 22 22:04:07.455: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local from pod dns-7413/dns-test-33205816-c58c-4d9b-9698-3b64918e82cb: the server could not find the requested resource (get pods dns-test-33205816-c58c-4d9b-9698-3b64918e82cb) Apr 22 22:04:07.457: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local from pod dns-7413/dns-test-33205816-c58c-4d9b-9698-3b64918e82cb: the server could not find the requested resource (get pods dns-test-33205816-c58c-4d9b-9698-3b64918e82cb) Apr 22 22:04:07.473: INFO: Lookups using dns-7413/dns-test-33205816-c58c-4d9b-9698-3b64918e82cb failed for: [wheezy_udp@dns-test-service.dns-7413.svc.cluster.local wheezy_tcp@dns-test-service.dns-7413.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local jessie_udp@dns-test-service.dns-7413.svc.cluster.local jessie_tcp@dns-test-service.dns-7413.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7413.svc.cluster.local] Apr 22 22:04:12.523: INFO: DNS probes using dns-7413/dns-test-33205816-c58c-4d9b-9698-3b64918e82cb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:12.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7413" for this suite. 
• [SLOW TEST:17.189 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":14,"skipped":243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:10.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:04:10.613: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:18.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3463" for this suite. • [SLOW TEST:8.140 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:01:30.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-5bc5184b-ff3a-4773-9ef8-469075f1f563 in namespace container-probe-533 Apr 22 22:01:36.596: INFO: Started pod liveness-5bc5184b-ff3a-4773-9ef8-469075f1f563 in namespace container-probe-533 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 22:01:36.600: INFO: Initial restart count of pod liveness-5bc5184b-ff3a-4773-9ef8-469075f1f563 is 0 Apr 22 22:01:52.636: INFO: Restart count of pod container-probe-533/liveness-5bc5184b-ff3a-4773-9ef8-469075f1f563 is now 1 (16.036617129s elapsed) Apr 22 22:02:14.685: INFO: Restart count of pod container-probe-533/liveness-5bc5184b-ff3a-4773-9ef8-469075f1f563 is now 2 (38.085557291s elapsed) Apr 22 22:02:34.734: 
INFO: Restart count of pod container-probe-533/liveness-5bc5184b-ff3a-4773-9ef8-469075f1f563 is now 3 (58.133899169s elapsed) Apr 22 22:03:20.848: INFO: Restart count of pod container-probe-533/liveness-5bc5184b-ff3a-4773-9ef8-469075f1f563 is now 4 (1m44.248328699s elapsed) Apr 22 22:04:18.969: INFO: Restart count of pod container-probe-533/liveness-5bc5184b-ff3a-4773-9ef8-469075f1f563 is now 5 (2m42.369612848s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:18.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-533" for this suite. • [SLOW TEST:168.433 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":288,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:09.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 22 22:04:09.646: INFO: Waiting up to 5m0s for pod "pod-9ad8079d-2863-4cc8-b5b3-bdfa6e9ccbb4" in namespace "emptydir-735" to be "Succeeded or Failed" Apr 22 22:04:09.651: INFO: Pod "pod-9ad8079d-2863-4cc8-b5b3-bdfa6e9ccbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.050985ms Apr 22 22:04:11.654: INFO: Pod "pod-9ad8079d-2863-4cc8-b5b3-bdfa6e9ccbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007996314s Apr 22 22:04:13.659: INFO: Pod "pod-9ad8079d-2863-4cc8-b5b3-bdfa6e9ccbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012448331s Apr 22 22:04:15.662: INFO: Pod "pod-9ad8079d-2863-4cc8-b5b3-bdfa6e9ccbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016044954s Apr 22 22:04:17.665: INFO: Pod "pod-9ad8079d-2863-4cc8-b5b3-bdfa6e9ccbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019062011s Apr 22 22:04:19.674: INFO: Pod "pod-9ad8079d-2863-4cc8-b5b3-bdfa6e9ccbb4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.027834194s STEP: Saw pod success Apr 22 22:04:19.674: INFO: Pod "pod-9ad8079d-2863-4cc8-b5b3-bdfa6e9ccbb4" satisfied condition "Succeeded or Failed" Apr 22 22:04:19.677: INFO: Trying to get logs from node node2 pod pod-9ad8079d-2863-4cc8-b5b3-bdfa6e9ccbb4 container test-container: STEP: delete the pod Apr 22 22:04:19.770: INFO: Waiting for pod pod-9ad8079d-2863-4cc8-b5b3-bdfa6e9ccbb4 to disappear Apr 22 22:04:19.772: INFO: Pod pod-9ad8079d-2863-4cc8-b5b3-bdfa6e9ccbb4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:19.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-735" for this suite. • [SLOW TEST:10.168 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":139,"failed":0} [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:19.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Apr 22 22:04:19.805: INFO: Major version: 1 STEP: Confirm minor version Apr 22 22:04:19.805: INFO: cleanMinorVersion: 21 Apr 22 22:04:19.805: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:19.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-6277" for this suite. 
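The server-version test above boils down to a single discovery call. A minimal sketch, using the same kubeconfig path seen throughout this run:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Discovery().ServerVersion() performs a GET against /version and
	// returns the Major/Minor fields the test asserts on.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Major: %s  Minor: %s  GitVersion: %s\n", v.Major, v.Minor, v.GitVersion)
}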
• ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":8,"skipped":139,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:11.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 22 22:04:11.847: INFO: Waiting up to 5m0s for pod "pod-4a67181c-e688-4213-8b5f-114dc62be6a4" in namespace "emptydir-2037" to be "Succeeded or Failed" Apr 22 22:04:11.849: INFO: Pod "pod-4a67181c-e688-4213-8b5f-114dc62be6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029238ms Apr 22 22:04:13.852: INFO: Pod "pod-4a67181c-e688-4213-8b5f-114dc62be6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005470672s Apr 22 22:04:15.856: INFO: Pod "pod-4a67181c-e688-4213-8b5f-114dc62be6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009711047s Apr 22 22:04:17.860: INFO: Pod "pod-4a67181c-e688-4213-8b5f-114dc62be6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013431711s Apr 22 22:04:19.866: INFO: Pod "pod-4a67181c-e688-4213-8b5f-114dc62be6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019189557s Apr 22 22:04:21.871: INFO: Pod "pod-4a67181c-e688-4213-8b5f-114dc62be6a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024179225s STEP: Saw pod success Apr 22 22:04:21.871: INFO: Pod "pod-4a67181c-e688-4213-8b5f-114dc62be6a4" satisfied condition "Succeeded or Failed" Apr 22 22:04:21.873: INFO: Trying to get logs from node node2 pod pod-4a67181c-e688-4213-8b5f-114dc62be6a4 container test-container: STEP: delete the pod Apr 22 22:04:21.988: INFO: Waiting for pod pod-4a67181c-e688-4213-8b5f-114dc62be6a4 to disappear Apr 22 22:04:21.990: INFO: Pod pod-4a67181c-e688-4213-8b5f-114dc62be6a4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:21.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2037" for this suite. 
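The emptyDir mode test above mounts a tmpfs-backed volume and verifies its mode bits. Here is a sketch of the pod under test, with illustrative names and a busybox image in place of the e2e mounttest image; it only prints the manifest rather than submitting it.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in for the e2e mounttest image
				// Show the mount's mode bits and filesystem type.
				Command:      []string{"sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}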
• [SLOW TEST:10.184 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":267,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:22.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-d16492cf-180a-4584-b130-a8420e1f24c5 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:22.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9948" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":12,"skipped":278,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:12.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 22:04:12.679: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13" in namespace "projected-5199" to be "Succeeded or Failed" Apr 22 22:04:12.681: INFO: Pod "downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13": Phase="Pending", Reason="", readiness=false. Elapsed: 1.980039ms Apr 22 22:04:14.685: INFO: Pod "downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006149353s Apr 22 22:04:16.690: INFO: Pod "downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.010579599s Apr 22 22:04:18.694: INFO: Pod "downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015078661s Apr 22 22:04:20.700: INFO: Pod "downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13": Phase="Running", Reason="", readiness=true. Elapsed: 8.020326807s Apr 22 22:04:22.703: INFO: Pod "downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13": Phase="Running", Reason="", readiness=true. Elapsed: 10.023413924s Apr 22 22:04:24.707: INFO: Pod "downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.02744214s STEP: Saw pod success Apr 22 22:04:24.707: INFO: Pod "downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13" satisfied condition "Succeeded or Failed" Apr 22 22:04:24.709: INFO: Trying to get logs from node node2 pod downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13 container client-container: STEP: delete the pod Apr 22 22:04:24.721: INFO: Waiting for pod downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13 to disappear Apr 22 22:04:24.723: INFO: Pod downwardapi-volume-ac9c485c-60c9-459f-8bd8-37aceb06dc13 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:24.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5199" for this suite. • [SLOW TEST:12.083 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":287,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:24.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Apr 22 22:04:25.264: INFO: starting watch STEP: patching STEP: updating Apr 22 22:04:25.274: INFO: waiting for watch events with expected annotations Apr 22 22:04:25.274: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:25.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"certificates-8675" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":16,"skipped":297,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:19.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics Apr 22 22:04:29.936: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Apr 22 22:04:30.009: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Apr 22 22:04:30.009: INFO: Deleting pod "simpletest-rc-to-be-deleted-24774" in namespace "gc-5296" Apr 22 22:04:30.016: INFO: Deleting pod "simpletest-rc-to-be-deleted-2tm86" in namespace "gc-5296" Apr 22 22:04:30.021: INFO: Deleting pod "simpletest-rc-to-be-deleted-6sczc" in namespace "gc-5296" Apr 22 22:04:30.026: INFO: Deleting pod "simpletest-rc-to-be-deleted-96mb9" in namespace "gc-5296" Apr 22 22:04:30.033: INFO: Deleting pod "simpletest-rc-to-be-deleted-bv727" in namespace "gc-5296" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:30.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5296" for this suite. 
• [SLOW TEST:10.211 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":9,"skipped":148,"failed":0} SSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":13,"skipped":279,"failed":0} [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:18.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:04:18.756: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 22 22:04:18.762: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 22 22:04:23.765: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 22 22:04:27.771: INFO: Creating deployment "test-rolling-update-deployment" Apr 22 22:04:27.774: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 22 22:04:27.779: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 22 22:04:29.785: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 22 22:04:29.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261867, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261867, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261867, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261867, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:04:31.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261867, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261867, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261867, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261867, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:04:33.790: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Apr 22 22:04:33.798: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5379 c6b05606-37e8-44cb-8f28-a26c3f4ad492 44004 1 2022-04-22 22:04:27 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-04-22 22:04:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-22 22:04:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001abb9f8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-22 22:04:27 +0000 UTC,LastTransitionTime:2022-04-22 22:04:27 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2022-04-22 22:04:32 +0000 UTC,LastTransitionTime:2022-04-22 22:04:27 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 22 22:04:33.801: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-5379 5a469aa6-659b-42a8-9c01-d8d752608520 43995 1 2022-04-22 22:04:27 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c6b05606-37e8-44cb-8f28-a26c3f4ad492 0xc001abbe87 0xc001abbe88}] [] [{kube-controller-manager Update apps/v1 2022-04-22 22:04:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b05606-37e8-44cb-8f28-a26c3f4ad492\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] []
Always 0xc001abbf18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:04:33.801: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 22 22:04:33.801: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5379 bd85134a-a12d-4c8a-8c6b-6242fcd0fa61 44003 2 2022-04-22 22:04:18 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c6b05606-37e8-44cb-8f28-a26c3f4ad492 0xc001abbd77 0xc001abbd78}] [] [{e2e.test Update apps/v1 2022-04-22 22:04:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-22 22:04:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b05606-37e8-44cb-8f28-a26c3f4ad492\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001abbe18 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 22:04:33.805: INFO: Pod "test-rolling-update-deployment-585b757574-7bck9" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-7bck9 test-rolling-update-deployment-585b757574- deployment-5379 8b1e596b-4482-4479-9b54-de65dac29bd0 43994 0 2022-04-22 22:04:27 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.222" ], "mac": "e2:e1:5f:3f:de:98", "default": true, 
"dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.222" ], "mac": "e2:e1:5f:3f:de:98", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 5a469aa6-659b-42a8-9c01-d8d752608520 0xc00320e32f 0xc00320e340}] [] [{kube-controller-manager Update v1 2022-04-22 22:04:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a469aa6-659b-42a8-9c01-d8d752608520\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:04:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:04:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.222\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bbrl8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bbrl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:n
il,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:04:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.222,StartTime:2022-04-22 22:04:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:04:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://d71687235950ea1032a09b0a21240758426e350218f584be532d9a5b7a12667d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:33.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5379" for this suite. 
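The adoption logged above works because the new Deployment uses a selector that already matches the pods of replica set "test-rolling-update-controller". A client-go sketch of the object the test creates, reconstructed from the spec dump above (one replica, 25% maxSurge/maxUnavailable, the agnhost template); cs is assumed to be a configured clientset:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createRollingUpdateDeployment creates a Deployment whose selector matches
// the pre-existing replica set's pods, so the deployment controller adopts
// that replica set as an old ReplicaSet and rolls its pods to the new template.
func createRollingUpdateDeployment(ctx context.Context, cs kubernetes.Interface) error {
	one := int32(1)
	maxSurge := intstr.FromString("25%")
	maxUnavail := intstr.FromString("25%")
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &one,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavail,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "agnhost",
					Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				}}},
			},
		},
	}
	_, err := cs.AppsV1().Deployments("deployment-5379").Create(ctx, d, metav1.CreateOptions{})
	return err
}

The adopted replica set carries revision annotation 3546343826724305832, so the controller stamps the new ReplicaSet with 3546343826724305833, which is exactly what the "gets the next revision" step checks.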
• [SLOW TEST:15.078 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":14,"skipped":279,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:33.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 22 22:04:33.858: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8384 13830fe3-c6c6-4009-873e-16b7c1fd9c31 44023 0 2022-04-22 22:04:33 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-22 22:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:04:33.858: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8384 13830fe3-c6c6-4009-873e-16b7c1fd9c31 44024 0 2022-04-22 22:04:33 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-22 22:04:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 22 22:04:33.875: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8384 13830fe3-c6c6-4009-873e-16b7c1fd9c31 44025 0 2022-04-22 22:04:33 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-22 22:04:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:04:33.876: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8384 13830fe3-c6c6-4009-873e-16b7c1fd9c31 44026 0 2022-04-22 22:04:33 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-22 22:04:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:33.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8384" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":15,"skipped":283,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:30.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Apr 22 22:04:30.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8191 create -f -' Apr 22 22:04:30.456: INFO: stderr: "" Apr 22 22:04:30.456: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Apr 22 22:04:31.460: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:04:31.460: INFO: Found 0 / 1 Apr 22 22:04:32.459: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:04:32.459: INFO: Found 0 / 1 Apr 22 22:04:33.459: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:04:33.459: INFO: Found 0 / 1 Apr 22 22:04:34.459: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:04:34.459: INFO: Found 0 / 1 Apr 22 22:04:35.459: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:04:35.459: INFO: Found 0 / 1 Apr 22 22:04:36.460: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:04:36.460: INFO: Found 1 / 1 Apr 22 22:04:36.460: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 22 22:04:36.462: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:04:36.462: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 22 22:04:36.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8191 patch pod agnhost-primary-z4vqv -p {"metadata":{"annotations":{"x":"y"}}}' Apr 22 22:04:36.619: INFO: stderr: "" Apr 22 22:04:36.619: INFO: stdout: "pod/agnhost-primary-z4vqv patched\n" STEP: checking annotations Apr 22 22:04:36.622: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 22:04:36.622: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:36.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8191" for this suite. 
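The kubectl patch invocation above is a strategic-merge patch; the same operation can be issued directly against the API with client-go. A sketch, assuming a configured clientset cs; the pod name is the generated one from this run and is illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchPodAnnotation applies the same patch body the test passes to
// kubectl: {"metadata":{"annotations":{"x":"y"}}}.
func patchPodAnnotation(ctx context.Context, cs kubernetes.Interface) error {
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	_, err := cs.CoreV1().Pods("kubectl-8191").Patch(ctx,
		"agnhost-primary-z4vqv", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}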
• [SLOW TEST:6.561 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":10,"skipped":159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:36.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:36.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1175" for this suite. 
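The kubelet spec above logs no STEP lines because its work happens in the framework hooks: it creates a pod whose container command always fails, then verifies the pod can still be deleted while crash-looping. A client-go sketch of that shape, assuming a configured clientset cs; names are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteCrashingPod creates a pod that restarts forever, then deletes it
// with a zero grace period; the delete must succeed regardless of the
// container's state.
func deleteCrashingPod(ctx context.Context, cs kubernetes.Interface) error {
	ns := "kubelet-test-1175"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always exits non-zero
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return err
	}
	zero := int64(0)
	return cs.CoreV1().Pods(ns).Delete(ctx, pod.Name, metav1.DeleteOptions{GracePeriodSeconds: &zero})
}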
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":199,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:22.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:04:22.101: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 22 22:04:30.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 --namespace=crd-publish-openapi-894 create -f -' Apr 22 22:04:31.264: INFO: stderr: "" Apr 22 22:04:31.264: INFO: stdout: "e2e-test-crd-publish-openapi-148-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 22 22:04:31.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 --namespace=crd-publish-openapi-894 delete e2e-test-crd-publish-openapi-148-crds test-foo' Apr 22 22:04:31.440: INFO: stderr: "" Apr 22 22:04:31.441: INFO: stdout: "e2e-test-crd-publish-openapi-148-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 22 22:04:31.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 --namespace=crd-publish-openapi-894 apply -f -' Apr 22 22:04:31.839: INFO: stderr: "" Apr 22 22:04:31.839: INFO: stdout: "e2e-test-crd-publish-openapi-148-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 22 22:04:31.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 --namespace=crd-publish-openapi-894 delete e2e-test-crd-publish-openapi-148-crds test-foo' Apr 22 22:04:31.996: INFO: stderr: "" Apr 22 22:04:31.996: INFO: stdout: "e2e-test-crd-publish-openapi-148-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 22 22:04:31.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 --namespace=crd-publish-openapi-894 create -f -' Apr 22 22:04:32.305: INFO: rc: 1 Apr 22 22:04:32.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 --namespace=crd-publish-openapi-894 apply -f -' Apr 22 22:04:32.609: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 22 22:04:32.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 --namespace=crd-publish-openapi-894 create -f -' Apr 22 22:04:32.913: INFO: rc: 1 Apr 22 22:04:32.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 
--namespace=crd-publish-openapi-894 apply -f -' Apr 22 22:04:33.220: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 22 22:04:33.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 explain e2e-test-crd-publish-openapi-148-crds' Apr 22 22:04:33.548: INFO: stderr: "" Apr 22 22:04:33.548: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-148-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 22 22:04:33.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 explain e2e-test-crd-publish-openapi-148-crds.metadata' Apr 22 22:04:33.912: INFO: stderr: "" Apr 22 22:04:33.912: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-148-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 22 22:04:33.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 explain e2e-test-crd-publish-openapi-148-crds.spec' Apr 22 22:04:34.276: INFO: stderr: "" Apr 22 22:04:34.276: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-148-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 22 22:04:34.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 explain e2e-test-crd-publish-openapi-148-crds.spec.bars' Apr 22 22:04:34.623: INFO: stderr: "" Apr 22 22:04:34.623: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-148-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<integer>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 22 22:04:34.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-894 explain e2e-test-crd-publish-openapi-148-crds.spec.bars2' Apr 22 22:04:34.918: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:38.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-894" for this suite. • [SLOW TEST:16.505 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":13,"skipped":285,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:25.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 22 22:04:25.667: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required
revision set Apr 22 22:04:27.676: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261865, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261865, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261865, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261865, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:04:29.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261865, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261865, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261865, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261865, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:04:32.689: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:04:32.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:40.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7877" for this suite. 
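The webhook deployed above receives ConversionReview objects and must return the same objects rewritten to the desired API version. A sketch of the handler core using the apiextensions v1 types; this is an assumed shape for illustration, not the actual code of the e2e webhook image:

package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
)

// convertReview rewrites each object in the request to the desired API
// version. A real webhook would also transform any fields whose schema
// changed between versions; the trivial v1 -> v2 test CRD needs only the
// version flip.
func convertReview(review *apiextensionsv1.ConversionReview) *apiextensionsv1.ConversionReview {
	resp := &apiextensionsv1.ConversionResponse{
		UID:    review.Request.UID,
		Result: metav1.Status{Status: metav1.StatusSuccess},
	}
	for _, raw := range review.Request.Objects {
		obj := &unstructured.Unstructured{}
		if err := obj.UnmarshalJSON(raw.Raw); err != nil {
			resp.Result = metav1.Status{Status: metav1.StatusFailure, Message: err.Error()}
			break
		}
		obj.SetAPIVersion(review.Request.DesiredAPIVersion)
		resp.ConvertedObjects = append(resp.ConvertedObjects, runtime.RawExtension{Object: obj})
	}
	return &apiextensionsv1.ConversionReview{
		TypeMeta: metav1.TypeMeta{APIVersion: "apiextensions.k8s.io/v1", Kind: "ConversionReview"},
		Response: resp,
	}
}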
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:15.448 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":17,"skipped":303,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:40.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:40.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9271" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":18,"skipped":309,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:33.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:40.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8568" for this suite. 
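"Promptly calculated" in the spec above means the quota controller fills in .status.hard shortly after the object is created. A client-go sketch of the create-then-poll pattern, assuming a configured clientset cs; the quota name and limits are illustrative:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createQuotaAndWait creates a ResourceQuota and polls until the controller
// has published a status.hard entry for every spec.hard entry.
func createQuotaAndWait(ctx context.Context, cs kubernetes.Interface) error {
	ns := "resourcequota-8568"
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{Hard: corev1.ResourceList{
			corev1.ResourcePods: resource.MustParse("5"),
			corev1.ResourceCPU:  resource.MustParse("1"),
		}},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		got, err := cs.CoreV1().ResourceQuotas(ns).Get(ctx, rq.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return len(got.Status.Hard) == len(rq.Spec.Hard), nil
	})
}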
• [SLOW TEST:7.038 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":16,"skipped":297,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:38.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: Apr 22 22:04:38.686: INFO: Waiting up to 5m0s for pod "test-pod-e95d850f-3d37-4d30-bf17-f39a3bff963e" in namespace "svcaccounts-1379" to be "Succeeded or Failed" Apr 22 22:04:38.688: INFO: Pod "test-pod-e95d850f-3d37-4d30-bf17-f39a3bff963e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279508ms Apr 22 22:04:40.693: INFO: Pod "test-pod-e95d850f-3d37-4d30-bf17-f39a3bff963e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007478655s Apr 22 22:04:42.698: INFO: Pod "test-pod-e95d850f-3d37-4d30-bf17-f39a3bff963e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01239319s STEP: Saw pod success Apr 22 22:04:42.698: INFO: Pod "test-pod-e95d850f-3d37-4d30-bf17-f39a3bff963e" satisfied condition "Succeeded or Failed" Apr 22 22:04:42.701: INFO: Trying to get logs from node node2 pod test-pod-e95d850f-3d37-4d30-bf17-f39a3bff963e container agnhost-container: STEP: delete the pod Apr 22 22:04:42.716: INFO: Waiting for pod test-pod-e95d850f-3d37-4d30-bf17-f39a3bff963e to disappear Apr 22 22:04:42.718: INFO: Pod test-pod-e95d850f-3d37-4d30-bf17-f39a3bff963e no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:42.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1379" for this suite. 
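The pod under test mounts a projected service-account token rather than the legacy secret-based one. A client-go sketch of such a volume, assuming a configured clientset cs; the pod name, mount path, and expiry are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// projectedTokenPod creates a pod whose container reads a kubelet-rotated,
// time-bound service account token from a projected volume.
func projectedTokenPod(ctx context.Context, cs kubernetes.Interface) error {
	expiry := int64(3600)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-token-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "sa-token",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
								Path:              "token",
								ExpirationSeconds: &expiry,
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Command: []string{"sh", "-c", "cat /var/run/secrets/tokens/token"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "sa-token",
					MountPath: "/var/run/secrets/tokens",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	_, err := cs.CoreV1().Pods("svcaccounts-1379").Create(ctx, pod, metav1.CreateOptions{})
	return err
}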
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":14,"skipped":316,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:36.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Apr 22 22:04:36.850: INFO: The status of Pod annotationupdate1a147010-3aae-4701-b547-ad9f9e47a8a8 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:04:38.853: INFO: The status of Pod annotationupdate1a147010-3aae-4701-b547-ad9f9e47a8a8 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:04:40.854: INFO: The status of Pod annotationupdate1a147010-3aae-4701-b547-ad9f9e47a8a8 is Running (Ready = true) Apr 22 22:04:41.380: INFO: Successfully updated pod "annotationupdate1a147010-3aae-4701-b547-ad9f9e47a8a8" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:45.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8049" for this suite. 
• [SLOW TEST:8.707 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:42.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8651.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8651.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8651.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8651.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8651.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8651.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 22:04:46.813: INFO: DNS probes using dns-8651/dns-test-46a1f778-09c9-4ccf-997b-f58cf208cd69 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:46.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8651" for this suite. 
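The wheezy/jessie probe loops above are hard to read inline; here is one iteration reformatted (same commands and names as the log; the doubled $$ in the log is template escaping for a literal $):

# /etc/hosts checks: both the FQDN and the bare pod name must resolve.
test -n "$(getent hosts dns-querier-1.dns-test-service.dns-8651.svc.cluster.local)" \
  && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8651.svc.cluster.local
test -n "$(getent hosts dns-querier-1)" \
  && echo OK > /results/wheezy_hosts@dns-querier-1
# Pod A-record checks over UDP and TCP: build the dashed-IP record name
# from the pod IP, then query it with dig.
podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8651.pod.cluster.local"}')
check="$(dig +notcp +noall +answer +search ${podARec} A)" \
  && test -n "$check" && echo OK > /results/wheezy_udp@PodARecord
check="$(dig +tcp +noall +answer +search ${podARec} A)" \
  && test -n "$check" && echo OK > /results/wheezy_tcp@PodARecord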
• ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":320,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:40.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 22 22:04:40.996: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 22 22:04:46.001: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:47.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7625" for this suite. • [SLOW TEST:6.052 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":17,"skipped":305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:18.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4959.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4959.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4959.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4959.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 22:04:31.054: INFO: DNS probes using dns-test-f4be2b40-4b21-4c96-9f31-aafa2c074382 succeeded STEP: 
deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4959.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4959.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4959.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4959.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 22:04:39.089: INFO: DNS probes using dns-test-93d6b6ad-304c-484e-a937-27410ec6b3d3 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4959.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4959.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4959.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4959.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 22:04:47.129: INFO: DNS probes using dns-test-b9d01990-bbe7-4c6d-899c-b5244b89d241 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:47.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4959" for this suite. • [SLOW TEST:28.151 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":32,"skipped":292,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:47.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:47.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-1703" for this suite. 
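Stepping back to the ExternalName test above: its three phases map to ordinary kubectl operations. A minimal sketch; the initial external name is not shown in the log, so foo.example.com is an assumption, and converting to ClusterIP may additionally require spec.ports on a real cluster:

# Phase 1: an ExternalName service answers DNS queries with a CNAME.
kubectl create service externalname dns-test-service-3 -n dns-4959 \
  --external-name foo.example.com
dig +short dns-test-service-3.dns-4959.svc.cluster.local CNAME
# Phase 2: changing the external name changes the CNAME target.
kubectl patch service dns-test-service-3 -n dns-4959 \
  -p '{"spec":{"externalName":"bar.example.com"}}'
# Phase 3: switching to type=ClusterIP makes the same service name resolve
# to an A record instead of a CNAME.
kubectl patch service dns-test-service-3 -n dns-4959 \
  -p '{"spec":{"type":"ClusterIP","externalName":null}}'
dig +short dns-test-service-3.dns-4959.svc.cluster.local A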
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":33,"skipped":304,"failed":0} [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:47.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Apr 22 22:04:47.773: INFO: created pod pod-service-account-defaultsa Apr 22 22:04:47.773: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 22 22:04:47.782: INFO: created pod pod-service-account-mountsa Apr 22 22:04:47.782: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 22 22:04:47.792: INFO: created pod pod-service-account-nomountsa Apr 22 22:04:47.792: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 22 22:04:47.802: INFO: created pod pod-service-account-defaultsa-mountspec Apr 22 22:04:47.802: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 22 22:04:47.811: INFO: created pod pod-service-account-mountsa-mountspec Apr 22 22:04:47.811: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 22 22:04:47.820: INFO: created pod pod-service-account-nomountsa-mountspec Apr 22 22:04:47.820: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 22 22:04:47.829: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 22 22:04:47.829: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 22 22:04:47.837: INFO: created pod pod-service-account-mountsa-nomountspec Apr 22 22:04:47.838: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 22 22:04:47.846: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 22 22:04:47.846: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:47.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9679" for this suite. 
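The pod matrix above crosses ServiceAccount-level and pod-level automount settings; the controlling field is automountServiceAccountToken, and the pod-level value wins whenever both are set, which is why pod-service-account-nomountsa-mountspec still reports a token volume mount of true. A minimal sketch of the pod-level opt-out (name and image are placeholders):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-no-token
spec:
  automountServiceAccountToken: false   # pod-level opt-out overrides the SA
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF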
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":34,"skipped":304,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:45.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 22 22:04:45.693: INFO: Waiting up to 5m0s for pod "pod-0f8f421b-2e9a-408d-96b5-78cae365c760" in namespace "emptydir-6653" to be "Succeeded or Failed" Apr 22 22:04:45.695: INFO: Pod "pod-0f8f421b-2e9a-408d-96b5-78cae365c760": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163722ms Apr 22 22:04:47.700: INFO: Pod "pod-0f8f421b-2e9a-408d-96b5-78cae365c760": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006475256s Apr 22 22:04:49.704: INFO: Pod "pod-0f8f421b-2e9a-408d-96b5-78cae365c760": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010788553s Apr 22 22:04:51.708: INFO: Pod "pod-0f8f421b-2e9a-408d-96b5-78cae365c760": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015189736s STEP: Saw pod success Apr 22 22:04:51.708: INFO: Pod "pod-0f8f421b-2e9a-408d-96b5-78cae365c760" satisfied condition "Succeeded or Failed" Apr 22 22:04:51.710: INFO: Trying to get logs from node node1 pod pod-0f8f421b-2e9a-408d-96b5-78cae365c760 container test-container: STEP: delete the pod Apr 22 22:04:51.726: INFO: Waiting for pod pod-0f8f421b-2e9a-408d-96b5-78cae365c760 to disappear Apr 22 22:04:51.728: INFO: Pod pod-0f8f421b-2e9a-408d-96b5-78cae365c760 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:51.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6653" for this suite. 
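The "(non-root,0666,tmpfs)" case above boils down to a non-root pod writing a mode-0666 file into a memory-backed emptyDir; a minimal sketch (name, UID and image are placeholders; the e2e suite uses its own test image):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  securityContext:
    runAsUser: 1001            # non-root
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/test && chmod 0666 /mnt/test && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs-backed
EOF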
• [SLOW TEST:6.077 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":278,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:20.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-6129 STEP: creating service affinity-nodeport-transition in namespace services-6129 STEP: creating replication controller affinity-nodeport-transition in namespace services-6129 I0422 22:02:20.895128 36 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-6129, replica count: 3 I0422 22:02:23.946391 36 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:02:26.946875 36 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 22:02:26.957: INFO: Creating new exec pod Apr 22 22:02:33.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Apr 22 22:02:34.261: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Apr 22 22:02:34.261: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 22:02:34.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.51.16 80' Apr 22 22:02:34.505: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.51.16 80\nConnection to 10.233.51.16 80 port [tcp/http] succeeded!\n" Apr 22 22:02:34.505: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 22:02:34.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:02:34.836: INFO: rc: 1 Apr 22 22:02:34.836: INFO: Service reachability failing with error: error 
running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying...
[near-identical retry output condensed: the same probe was re-run roughly once per second from Apr 22 22:02:35 through Apr 22 22:03:41, and every attempt failed identically with rc: 1 and "nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused"]
Apr 22 22:03:41.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:42.098: INFO: rc: 1 Apr 22 22:03:42.098: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:42.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:43.094: INFO: rc: 1 Apr 22 22:03:43.094: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:43.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:44.062: INFO: rc: 1 Apr 22 22:03:44.062: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:44.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:45.077: INFO: rc: 1 Apr 22 22:03:45.077: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:03:45.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:46.083: INFO: rc: 1 Apr 22 22:03:46.083: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:46.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:47.101: INFO: rc: 1 Apr 22 22:03:47.101: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:47.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:48.095: INFO: rc: 1 Apr 22 22:03:48.095: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:48.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:49.139: INFO: rc: 1 Apr 22 22:03:49.139: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:03:49.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:50.110: INFO: rc: 1 Apr 22 22:03:50.110: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:50.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:51.496: INFO: rc: 1 Apr 22 22:03:51.496: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:51.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:52.330: INFO: rc: 1 Apr 22 22:03:52.330: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:52.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:53.138: INFO: rc: 1 Apr 22 22:03:53.138: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:03:53.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:54.340: INFO: rc: 1 Apr 22 22:03:54.340: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:54.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:55.198: INFO: rc: 1 Apr 22 22:03:55.198: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:55.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:56.517: INFO: rc: 1 Apr 22 22:03:56.517: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:56.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:57.447: INFO: rc: 1 Apr 22 22:03:57.447: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:03:57.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:03:58.428: INFO: rc: 1 Apr 22 22:03:58.428: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:03:58.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:00.330: INFO: rc: 1 Apr 22 22:04:00.330: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:00.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:01.602: INFO: rc: 1 Apr 22 22:04:01.602: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:01.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:02.499: INFO: rc: 1 Apr 22 22:04:02.499: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:04:02.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:03.613: INFO: rc: 1 Apr 22 22:04:03.613: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:03.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:04.199: INFO: rc: 1 Apr 22 22:04:04.199: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:04.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:05.105: INFO: rc: 1 Apr 22 22:04:05.105: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:05.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:06.080: INFO: rc: 1 Apr 22 22:04:06.080: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:04:06.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:07.087: INFO: rc: 1 Apr 22 22:04:07.087: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:07.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:08.073: INFO: rc: 1 Apr 22 22:04:08.073: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:08.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:09.087: INFO: rc: 1 Apr 22 22:04:09.087: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:09.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:10.348: INFO: rc: 1 Apr 22 22:04:10.349: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:04:10.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:11.411: INFO: rc: 1 Apr 22 22:04:11.411: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:11.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:12.314: INFO: rc: 1 Apr 22 22:04:12.314: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:12.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:13.266: INFO: rc: 1 Apr 22 22:04:13.266: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:13.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:14.251: INFO: rc: 1 Apr 22 22:04:14.251: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:04:14.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:15.479: INFO: rc: 1 Apr 22 22:04:15.479: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:15.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:16.285: INFO: rc: 1 Apr 22 22:04:16.285: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:16.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:17.260: INFO: rc: 1 Apr 22 22:04:17.261: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:17.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:18.220: INFO: rc: 1 Apr 22 22:04:18.220: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:04:18.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:19.162: INFO: rc: 1 Apr 22 22:04:19.162: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:19.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:20.441: INFO: rc: 1 Apr 22 22:04:20.441: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:20.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:21.129: INFO: rc: 1 Apr 22 22:04:21.129: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:21.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:22.160: INFO: rc: 1 Apr 22 22:04:22.160: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:04:22.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:23.227: INFO: rc: 1 Apr 22 22:04:23.228: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:23.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:24.853: INFO: rc: 1 Apr 22 22:04:24.853: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:25.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:26.284: INFO: rc: 1 Apr 22 22:04:26.284: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:26.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:27.141: INFO: rc: 1 Apr 22 22:04:27.141: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:04:27.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:28.191: INFO: rc: 1 Apr 22 22:04:28.191: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:28.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:29.773: INFO: rc: 1 Apr 22 22:04:29.773: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:29.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:30.096: INFO: rc: 1 Apr 22 22:04:30.097: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:30.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:31.131: INFO: rc: 1 Apr 22 22:04:31.131: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:04:31.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:32.258: INFO: rc: 1 Apr 22 22:04:32.258: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:32.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:33.721: INFO: rc: 1 Apr 22 22:04:33.721: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:33.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:34.291: INFO: rc: 1 Apr 22 22:04:34.291: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:34.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691' Apr 22 22:04:35.307: INFO: rc: 1 Apr 22 22:04:35.307: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31691 nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
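Each attempt above is the same probe: the exec pod runs `echo hostName | nc -v -t -w 2 <node-ip> <node-port>` against the NodePort, and the framework retries about once per second until a 2-minute budget expires. Below is a minimal standalone Go sketch of that pattern, with the endpoint, the 2s per-attempt timeout, the ~1s cadence, and the 2m budget taken from the log; it dials the endpoint directly rather than going through kubectl exec, and it assumes an agnhost netexec backend that answers the "hostName" command. It is illustrative only, not the suite's implementation.

// probe_sketch.go - illustrative retry loop, not the e2e framework's code.
package main

import (
	"fmt"
	"net"
	"time"
)

// reachTCP mirrors `echo hostName | nc -v -t -w 2 <ip> <port>`: one TCP
// connect with a 2s deadline, then a single request/response exchange.
func reachTCP(endpoint string) (string, error) {
	conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
	if err != nil {
		return "", err // e.g. "connection refused", as seen in the log
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(2 * time.Second))
	if _, err := fmt.Fprintln(conn, "hostName"); err != nil {
		return "", err
	}
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil
}

func main() {
	endpoint := "10.10.190.207:31691" // node IP + NodePort from the log
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if host, err := reachTCP(endpoint); err == nil {
			fmt.Printf("reached service, backend pod: %s\n", host)
			return
		}
		fmt.Println("Retrying...")
		time.Sleep(1 * time.Second) // the log shows ~1s between attempts
	}
	fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", endpoint)
}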
Apr 22 22:04:35.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691'
Apr 22 22:04:35.587: INFO: rc: 1
Apr 22 22:04:35.587: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6129 exec execpod-affinityzlqjf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31691:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31691
nc: connect to 10.10.190.207 port 31691 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 22:04:35.588: FAIL: Unexpected error:
    <*errors.errorString | 0xc005aa0420>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31691 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31691 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000b666e0, 0x77b33d8, 0xc000cdd080, 0xc000bf0c80, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a05e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a05e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001a05e00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
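The failure surfaces in execAffinityTestForNonLBServiceWithOptionalTransition, which never got far enough to exercise affinity itself: the NodePort had to be reachable first. Had the probes succeeded, the property under test is simply that repeated requests from one client return the same backend hostname while session affinity is set to ClientIP. A rough sketch of that final check follows; the predicate is inferred from the test name and the agnhost "hostName" responses, not copied from the suite, and the example pod names are taken from the events below.

// affinity_sketch.go - assumed behavior, not the suite's exact algorithm.
package main

import "fmt"

// affinityHolds reports whether every response (the backend pod hostname
// returned by agnhost's "hostName" command) names the same pod.
func affinityHolds(hostnames []string) bool {
	if len(hostnames) == 0 {
		return false
	}
	for _, h := range hostnames[1:] {
		if h != hostnames[0] {
			return false
		}
	}
	return true
}

func main() {
	// With sessionAffinity=ClientIP, repeated requests from one client
	// should all land on the same backend pod:
	fmt.Println(affinityHolds([]string{
		"affinity-nodeport-transition-bd6hl",
		"affinity-nodeport-transition-bd6hl",
		"affinity-nodeport-transition-bd6hl",
	})) // true
}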
Apr 22 22:04:35.589: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6129, will wait for the garbage collector to delete the pods
Apr 22 22:04:35.654: INFO: Deleting ReplicationController affinity-nodeport-transition took: 3.951056ms
Apr 22 22:04:35.754: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.302597ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6129".
STEP: Found 27 events.
Apr 22 22:04:51.071: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-bd6hl: { } Scheduled: Successfully assigned services-6129/affinity-nodeport-transition-bd6hl to node2
Apr 22 22:04:51.071: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-rvq2p: { } Scheduled: Successfully assigned services-6129/affinity-nodeport-transition-rvq2p to node1
Apr 22 22:04:51.071: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-xzjkx: { } Scheduled: Successfully assigned services-6129/affinity-nodeport-transition-xzjkx to node1
Apr 22 22:04:51.071: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityzlqjf: { } Scheduled: Successfully assigned services-6129/execpod-affinityzlqjf to node2
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:20 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-xzjkx
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:20 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-rvq2p
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:20 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-bd6hl
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:22 +0000 UTC - event for affinity-nodeport-transition-rvq2p: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:22 +0000 UTC - event for affinity-nodeport-transition-rvq2p: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 328.361478ms
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:23 +0000 UTC - event for affinity-nodeport-transition-bd6hl: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:23 +0000 UTC - event for affinity-nodeport-transition-rvq2p: {kubelet node1} Created: Created container affinity-nodeport-transition
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:23 +0000 UTC - event for affinity-nodeport-transition-rvq2p: {kubelet node1} Started: Started container affinity-nodeport-transition
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:23 +0000 UTC - event for affinity-nodeport-transition-xzjkx: {kubelet node1} Created: Created container affinity-nodeport-transition
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:23 +0000 UTC - event for affinity-nodeport-transition-xzjkx: {kubelet node1} Started: Started container affinity-nodeport-transition
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:23 +0000 UTC - event for affinity-nodeport-transition-xzjkx: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 284.741147ms
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:23 +0000 UTC - event for affinity-nodeport-transition-xzjkx: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:24 +0000 UTC - event for affinity-nodeport-transition-bd6hl: {kubelet node2} Created: Created container affinity-nodeport-transition
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:24 +0000 UTC - event for affinity-nodeport-transition-bd6hl: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 394.979928ms
Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:25 +0000 UTC - event for
affinity-nodeport-transition-bd6hl: {kubelet node2} Started: Started container affinity-nodeport-transition Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:30 +0000 UTC - event for execpod-affinityzlqjf: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:30 +0000 UTC - event for execpod-affinityzlqjf: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 352.7121ms Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:31 +0000 UTC - event for execpod-affinityzlqjf: {kubelet node2} Created: Created container agnhost-container Apr 22 22:04:51.071: INFO: At 2022-04-22 22:02:31 +0000 UTC - event for execpod-affinityzlqjf: {kubelet node2} Started: Started container agnhost-container Apr 22 22:04:51.071: INFO: At 2022-04-22 22:04:35 +0000 UTC - event for affinity-nodeport-transition-bd6hl: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Apr 22 22:04:51.071: INFO: At 2022-04-22 22:04:35 +0000 UTC - event for affinity-nodeport-transition-rvq2p: {kubelet node1} Killing: Stopping container affinity-nodeport-transition Apr 22 22:04:51.071: INFO: At 2022-04-22 22:04:35 +0000 UTC - event for affinity-nodeport-transition-xzjkx: {kubelet node1} Killing: Stopping container affinity-nodeport-transition Apr 22 22:04:51.071: INFO: At 2022-04-22 22:04:35 +0000 UTC - event for execpod-affinityzlqjf: {kubelet node2} Killing: Stopping container agnhost-container Apr 22 22:04:51.073: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 22:04:51.073: INFO: Apr 22 22:04:51.078: INFO: Logging node info for node master1 Apr 22 22:04:51.080: INFO: Node Info: &Node{ObjectMeta:{master1 70710064-7222-41b1-b51e-81deaa6e7014 44619 0 2022-04-22 19:56:45 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:56:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-22 20:04:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:49 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:49 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:49 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:04:49 +0000 UTC,LastTransitionTime:2022-04-22 19:59:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:025a90e4dec046189b065fcf68380be7,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:7e907077-ed98-4d46-8305-29673eaf3bf3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:04:51.081: INFO: Logging kubelet events for node master1 Apr 22 22:04:51.083: INFO: Logging pods the kubelet thinks is on node master1 Apr 22 22:04:51.104: INFO: kube-scheduler-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.104: INFO: Container kube-scheduler ready: true, restart count 0 Apr 22 22:04:51.104: INFO: kube-apiserver-master1 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.104: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:04:51.104: INFO: kube-controller-manager-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.104: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:04:51.104: INFO: kube-multus-ds-amd64-px448 started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.104: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:04:51.104: INFO: prometheus-operator-585ccfb458-zsrdh started at 2022-04-22 20:13:26 +0000 UTC (0+2 container statuses recorded) Apr 22 22:04:51.104: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:04:51.104: INFO: Container prometheus-operator ready: true, restart count 0 Apr 22 22:04:51.104: INFO: container-registry-65d7c44b96-7r6xc started at 2022-04-22 20:04:24 +0000 UTC (0+2 container statuses recorded) Apr 22 22:04:51.104: INFO: Container docker-registry ready: true, restart count 0 Apr 22 22:04:51.104: INFO: Container nginx ready: true, restart count 0 Apr 22 22:04:51.104: INFO: node-exporter-b7qpl started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:04:51.104: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:04:51.104: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:04:51.104: INFO: kube-proxy-hfgsd started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.104: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:04:51.104: INFO: kube-flannel-6vhmq started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:04:51.104: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:04:51.104: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:04:51.104: INFO: dns-autoscaler-7df78bfcfb-smkxp started at 2022-04-22 20:00:11 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.104: INFO: Container autoscaler ready: true, restart count 2 Apr 22 22:04:51.198: INFO: Latency metrics for node master1 Apr 22 22:04:51.198: INFO: Logging node info for node master2 Apr 22 22:04:51.200: INFO: Node Info: &Node{ObjectMeta:{master2 4a346a45-ed0b-49d9-a2ad-b419d2c4705c 44531 0 2022-04-22 19:57:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-04-22 20:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-04-22 20:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:47 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:47 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:47 +0000 UTC,LastTransitionTime:2022-04-22 
19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:04:47 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a68fd05f71b4f40ab5ab92028e707cc,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:45292226-7389-4aa9-8a98-33e443731d14,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:04:51.201: INFO: Logging kubelet events for node master2 Apr 22 22:04:51.203: INFO: Logging pods the kubelet thinks is on node master2 Apr 22 22:04:51.212: INFO: kube-scheduler-master2 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.212: INFO: Container kube-scheduler ready: true, restart count 1 Apr 22 22:04:51.212: INFO: kube-flannel-jlvdn started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:04:51.212: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:04:51.212: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:04:51.212: INFO: kube-multus-ds-amd64-7hw9v started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.212: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:04:51.212: INFO: coredns-8474476ff8-fhb42 started at 2022-04-22 20:00:09 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.212: INFO: Container coredns ready: true, restart count 1 Apr 22 22:04:51.212: INFO: kube-apiserver-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.212: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:04:51.213: INFO: kube-controller-manager-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.213: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:04:51.213: INFO: kube-proxy-df6vx started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.213: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:04:51.213: INFO: node-feature-discovery-controller-cff799f9f-jfpb6 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.213: INFO: Container nfd-controller ready: true, restart count 0 Apr 22 22:04:51.213: INFO: node-exporter-4tbfp started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:04:51.213: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:04:51.213: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:04:51.302: INFO: Latency metrics for node master2 Apr 22 22:04:51.302: INFO: Logging node info for node master3 Apr 22 22:04:51.305: INFO: Node Info: &Node{ObjectMeta:{master3 43c25e47-7b5c-4cf0-863e-39d16b72dcb3 44476 0 2022-04-22 19:57:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] 
map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-04-22 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-04-22 20:11:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:46 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:46 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:46 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:04:46 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e38c1766e8048fab7e120a1bdaf206c,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7266f836-7ba1-4d9b-9691-d8344ab173f1,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:04:51.305: INFO: Logging kubelet events for node master3 Apr 22 22:04:51.307: INFO: Logging pods the kubelet thinks is on node master3 Apr 22 22:04:51.316: INFO: kube-apiserver-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.316: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:04:51.316: INFO: kube-controller-manager-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.316: INFO: Container kube-controller-manager ready: true, restart count 3 Apr 22 22:04:51.316: INFO: kube-scheduler-master3 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.316: INFO: Container kube-scheduler ready: true, restart count 2 Apr 22 22:04:51.316: INFO: coredns-8474476ff8-fdcj7 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.316: INFO: Container coredns ready: true, restart count 1 Apr 22 22:04:51.316: INFO: node-exporter-tnqsz started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:04:51.316: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:04:51.316: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:04:51.316: INFO: kube-proxy-z9q2t started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.316: INFO: Container kube-proxy ready: true, restart count 1 Apr 22 22:04:51.316: INFO: kube-flannel-6jkw9 started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:04:51.316: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:04:51.316: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 22:04:51.316: INFO: kube-multus-ds-amd64-tlrjm started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.316: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:04:51.403: INFO: Latency metrics for node master3 Apr 22 22:04:51.403: INFO: Logging node info for node node1 Apr 22 22:04:51.405: INFO: Node Info: &Node{ObjectMeta:{node1 e0ec3d42-4e2e-47e3-b369-98011b25b39b 44304 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true 
feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:11:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:29 +0000 UTC,LastTransitionTime:2022-04-22 20:02:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:41 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:41 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:41 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:04:41 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cb8bd90647b418e9defe4fbcf1e6b5b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:bd49e3f7-3bce-4d4e-8596-432fc9a7c1c3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:04:51.406: INFO: Logging kubelet events for node node1 Apr 22 22:04:51.408: INFO: Logging pods the kubelet thinks is on node node1 Apr 22 22:04:51.425: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g started at 2022-04-22 
20:16:40 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.425: INFO: Container tas-extender ready: true, restart count 0 Apr 22 22:04:51.425: INFO: cmk-init-discover-node1-7s78z started at 2022-04-22 20:11:46 +0000 UTC (0+3 container statuses recorded) Apr 22 22:04:51.425: INFO: Container discover ready: false, restart count 0 Apr 22 22:04:51.426: INFO: Container init ready: false, restart count 0 Apr 22 22:04:51.426: INFO: Container install ready: false, restart count 0 Apr 22 22:04:51.426: INFO: sample-webhook-deployment-78988fc6cd-w9g8p started at 2022-04-22 22:04:48 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container sample-webhook ready: false, restart count 0 Apr 22 22:04:51.426: INFO: node-feature-discovery-worker-2hkr5 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container nfd-worker ready: true, restart count 0 Apr 22 22:04:51.426: INFO: node-exporter-9zzfv started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:04:51.426: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:04:51.426: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:04:51.426: INFO: sample-webhook-deployment-78988fc6cd-jb2rs started at 2022-04-22 22:04:41 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container sample-webhook ready: true, restart count 0 Apr 22 22:04:51.426: INFO: kube-multus-ds-amd64-x8jqs started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:04:51.426: INFO: forbid-27511080-d76jg started at 2022-04-22 22:00:00 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container c ready: true, restart count 0 Apr 22 22:04:51.426: INFO: cmk-2vd7z started at 2022-04-22 20:12:29 +0000 UTC (0+2 container statuses recorded) Apr 22 22:04:51.426: INFO: Container nodereport ready: true, restart count 0 Apr 22 22:04:51.426: INFO: Container reconcile ready: true, restart count 0 Apr 22 22:04:51.426: INFO: pod-0f8f421b-2e9a-408d-96b5-78cae365c760 started at 2022-04-22 22:04:45 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container test-container ready: false, restart count 0 Apr 22 22:04:51.426: INFO: pod-service-account-mountsa-mountspec started at 2022-04-22 22:04:47 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container token-test ready: false, restart count 0 Apr 22 22:04:51.426: INFO: pod-service-account-nomountsa-nomountspec started at 2022-04-22 22:04:47 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container token-test ready: false, restart count 0 Apr 22 22:04:51.426: INFO: nginx-proxy-node1 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container nginx-proxy ready: true, restart count 2 Apr 22 22:04:51.426: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 22 22:04:51.426: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 22 22:04:51.426: INFO: prometheus-k8s-0 started at 2022-04-22 20:13:52 +0000 UTC (0+4 container statuses recorded) 
Apr 22 22:04:51.426: INFO: Container config-reloader ready: true, restart count 0 Apr 22 22:04:51.426: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 22 22:04:51.426: INFO: Container grafana ready: true, restart count 0 Apr 22 22:04:51.426: INFO: Container prometheus ready: true, restart count 1 Apr 22 22:04:51.426: INFO: collectd-g2c8k started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded) Apr 22 22:04:51.426: INFO: Container collectd ready: true, restart count 0 Apr 22 22:04:51.426: INFO: Container collectd-exporter ready: true, restart count 0 Apr 22 22:04:51.426: INFO: Container rbac-proxy ready: true, restart count 0 Apr 22 22:04:51.426: INFO: kube-proxy-v8fdh started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:04:51.426: INFO: affinity-nodeport-7ns5q started at 2022-04-22 22:03:54 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container affinity-nodeport ready: true, restart count 0 Apr 22 22:04:51.426: INFO: pod-secrets-e20774cf-3cf1-4696-af0b-2e66af1dd698 started at 2022-04-22 22:04:47 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container secret-volume-test ready: false, restart count 0 Apr 22 22:04:51.426: INFO: pod-service-account-defaultsa-nomountspec started at 2022-04-22 22:04:47 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container token-test ready: false, restart count 0 Apr 22 22:04:51.426: INFO: kube-flannel-l4rjs started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Init container install-cni ready: true, restart count 2 Apr 22 22:04:51.426: INFO: Container kube-flannel ready: true, restart count 3 Apr 22 22:04:51.426: INFO: execpod-affinityh7b5h started at 2022-04-22 22:04:03 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:51.426: INFO: Container agnhost-container ready: true, restart count 0 Apr 22 22:04:52.369: INFO: Latency metrics for node node1 Apr 22 22:04:52.369: INFO: Logging node info for node node2 Apr 22 22:04:52.372: INFO: Node Info: &Node{ObjectMeta:{node2 ef89f5d1-0c69-4be8-a041-8437402ef215 44597 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true 
feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:12:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:12:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:30 +0000 UTC,LastTransitionTime:2022-04-22 20:02:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:48 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:48 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:04:48 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:04:48 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e6f6d1644f942b881dbf2d9722ff85b,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:cc218e06-beff-411d-b91e-f4a272d9c83f,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:04:52.374: INFO: Logging kubelet events for node node2 Apr 22 22:04:52.377: INFO: Logging pods the kubelet thinks is on node node2 Apr 22 22:04:52.390: INFO: kube-proxy-jvkvz started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.390: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:04:52.390: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.390: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 22 22:04:52.390: INFO: annotationupdate1a147010-3aae-4701-b547-ad9f9e47a8a8 started at 2022-04-22 22:04:36 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.390: INFO: Container client-container ready: true, restart count 0 Apr 22 22:04:52.390: INFO: busybox-26df426c-8183-43f6-aa25-d63576f35e7f started at 2022-04-22 22:02:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.390: INFO: Container busybox ready: true, restart 
count 0 Apr 22 22:04:52.390: INFO: pod-service-account-mountsa-nomountspec started at 2022-04-22 22:04:47 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.390: INFO: Container token-test ready: false, restart count 0 Apr 22 22:04:52.390: INFO: pod-service-account-defaultsa started at 2022-04-22 22:04:47 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.390: INFO: Container token-test ready: false, restart count 0 Apr 22 22:04:52.390: INFO: node-feature-discovery-worker-bktph started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Container nfd-worker ready: true, restart count 0 Apr 22 22:04:52.391: INFO: cmk-vdkxb started at 2022-04-22 20:12:30 +0000 UTC (0+2 container statuses recorded) Apr 22 22:04:52.391: INFO: Container nodereport ready: true, restart count 0 Apr 22 22:04:52.391: INFO: Container reconcile ready: true, restart count 0 Apr 22 22:04:52.391: INFO: cmk-webhook-6c9d5f8578-nmxns started at 2022-04-22 20:12:30 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Container cmk-webhook ready: true, restart count 0 Apr 22 22:04:52.391: INFO: affinity-nodeport-9r2t4 started at 2022-04-22 22:03:54 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Container affinity-nodeport ready: true, restart count 0 Apr 22 22:04:52.391: INFO: pod-service-account-mountsa started at 2022-04-22 22:04:47 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Container token-test ready: false, restart count 0 Apr 22 22:04:52.391: INFO: nginx-proxy-node2 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Container nginx-proxy ready: true, restart count 1 Apr 22 22:04:52.391: INFO: pod-service-account-nomountsa started at 2022-04-22 22:04:47 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Container token-test ready: false, restart count 0 Apr 22 22:04:52.391: INFO: pod-service-account-defaultsa-mountspec started at 2022-04-22 22:04:47 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Container token-test ready: false, restart count 0 Apr 22 22:04:52.391: INFO: pod-service-account-nomountsa-mountspec started at 2022-04-22 22:04:47 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Container token-test ready: false, restart count 0 Apr 22 22:04:52.391: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 22 22:04:52.391: INFO: kube-multus-ds-amd64-kjrqq started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:04:52.391: INFO: collectd-ptpbz started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded) Apr 22 22:04:52.391: INFO: Container collectd ready: true, restart count 0 Apr 22 22:04:52.391: INFO: Container collectd-exporter ready: true, restart count 0 Apr 22 22:04:52.391: INFO: Container rbac-proxy ready: true, restart count 0 Apr 22 22:04:52.391: INFO: bin-false8076e2a4-17b7-4d49-8bce-8b2208e9df0c started at (0+0 container statuses recorded) Apr 22 22:04:52.391: INFO: affinity-nodeport-ks7k5 started at 2022-04-22 22:03:54 +0000 UTC (0+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Container affinity-nodeport ready: true, restart count 0 Apr 22 22:04:52.391: INFO: 
kube-flannel-2kskh started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:04:52.391: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:04:52.391: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 22:04:52.391: INFO: cmk-init-discover-node2-2m4dr started at 2022-04-22 20:12:06 +0000 UTC (0+3 container statuses recorded) Apr 22 22:04:52.391: INFO: Container discover ready: false, restart count 0 Apr 22 22:04:52.391: INFO: Container init ready: false, restart count 0 Apr 22 22:04:52.391: INFO: Container install ready: false, restart count 0 Apr 22 22:04:52.391: INFO: node-exporter-c4bhs started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:04:52.391: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:04:52.391: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:04:53.081: INFO: Latency metrics for node node2 Apr 22 22:04:53.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6129" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [152.223 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:04:35.588: Unexpected error: <*errors.errorString | 0xc005aa0420>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31691 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31691 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":17,"skipped":438,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:47.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-2400a55d-8e11-4e06-a487-ef30cf8445ce STEP: Creating a pod to test consume secrets Apr 22 22:04:47.117: INFO: Waiting up to 5m0s for pod "pod-secrets-e20774cf-3cf1-4696-af0b-2e66af1dd698" in namespace "secrets-2961" to be "Succeeded or Failed" Apr 22 22:04:47.119: INFO: Pod "pod-secrets-e20774cf-3cf1-4696-af0b-2e66af1dd698": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.152814ms Apr 22 22:04:49.124: INFO: Pod "pod-secrets-e20774cf-3cf1-4696-af0b-2e66af1dd698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006783717s Apr 22 22:04:51.128: INFO: Pod "pod-secrets-e20774cf-3cf1-4696-af0b-2e66af1dd698": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010450308s Apr 22 22:04:53.134: INFO: Pod "pod-secrets-e20774cf-3cf1-4696-af0b-2e66af1dd698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017277712s STEP: Saw pod success Apr 22 22:04:53.134: INFO: Pod "pod-secrets-e20774cf-3cf1-4696-af0b-2e66af1dd698" satisfied condition "Succeeded or Failed" Apr 22 22:04:53.142: INFO: Trying to get logs from node node1 pod pod-secrets-e20774cf-3cf1-4696-af0b-2e66af1dd698 container secret-volume-test: STEP: delete the pod Apr 22 22:04:53.155: INFO: Waiting for pod pod-secrets-e20774cf-3cf1-4696-af0b-2e66af1dd698 to disappear Apr 22 22:04:53.157: INFO: Pod pod-secrets-e20774cf-3cf1-4696-af0b-2e66af1dd698 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:53.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2961" for this suite. • [SLOW TEST:6.084 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":334,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:53.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:53.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4641" for this suite. 
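------------------------------
The discovery sequence exercised above (fetch /apis, find the apiextensions.k8s.io group, drill down to /apis/apiextensions.k8s.io/v1, and look for the customresourcedefinitions resource) maps directly onto client-go's discovery client. A minimal sketch, assuming the same kubeconfig path the suite logs; error handling is reduced to panics:

package main

import (
    "fmt"

    "k8s.io/client-go/discovery"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the same kind of kubeconfig the suite logs (">>> kubeConfig: ...").
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Step 1: fetch the /apis discovery document and find apiextensions.k8s.io.
    groups, err := dc.ServerGroups()
    if err != nil {
        panic(err)
    }
    for _, g := range groups.Groups {
        if g.Name == "apiextensions.k8s.io" {
            fmt.Println("preferred version:", g.PreferredVersion.GroupVersion)
        }
    }
    // Step 2: fetch /apis/apiextensions.k8s.io/v1 and look for the CRD resource.
    rl, err := dc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
    if err != nil {
        panic(err)
    }
    for _, r := range rl.APIResources {
        if r.Name == "customresourcedefinitions" {
            fmt.Println("found resource:", r.Name, "kind:", r.Kind)
        }
    }
}

Against a healthy apiserver this should report apiextensions.k8s.io/v1 as the preferred version and CustomResourceDefinition as the kind, mirroring the STEP sequence above.
------------------------------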
• ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:40.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:04:41.435: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:04:43.445: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261881, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261881, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261881, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261881, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:04:46.457: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:04:46.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6658-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:54.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9621" for this suite. STEP: Destroying namespace "webhook-9621-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.671 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":19,"skipped":317,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:54.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Apr 22 22:04:54.587: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3897 proxy --unix-socket=/tmp/kubectl-proxy-unix323172840/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:54.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3897" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":20,"skipped":333,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:53.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-d571a201-1e8a-4826-8124-cec42abac7df STEP: Creating a pod to test consume secrets Apr 22 22:04:53.165: INFO: Waiting up to 5m0s for pod "pod-secrets-9c3b51c9-b7e6-4c1b-b4f5-2e50f6dcb978" in namespace "secrets-173" to be "Succeeded or Failed" Apr 22 22:04:53.167: INFO: Pod "pod-secrets-9c3b51c9-b7e6-4c1b-b4f5-2e50f6dcb978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132152ms Apr 22 22:04:55.170: INFO: Pod "pod-secrets-9c3b51c9-b7e6-4c1b-b4f5-2e50f6dcb978": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005533542s Apr 22 22:04:57.173: INFO: Pod "pod-secrets-9c3b51c9-b7e6-4c1b-b4f5-2e50f6dcb978": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008348075s Apr 22 22:04:59.178: INFO: Pod "pod-secrets-9c3b51c9-b7e6-4c1b-b4f5-2e50f6dcb978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012888957s STEP: Saw pod success Apr 22 22:04:59.178: INFO: Pod "pod-secrets-9c3b51c9-b7e6-4c1b-b4f5-2e50f6dcb978" satisfied condition "Succeeded or Failed" Apr 22 22:04:59.180: INFO: Trying to get logs from node node1 pod pod-secrets-9c3b51c9-b7e6-4c1b-b4f5-2e50f6dcb978 container secret-volume-test: STEP: delete the pod Apr 22 22:04:59.403: INFO: Waiting for pod pod-secrets-9c3b51c9-b7e6-4c1b-b4f5-2e50f6dcb978 to disappear Apr 22 22:04:59.405: INFO: Pod pod-secrets-9c3b51c9-b7e6-4c1b-b4f5-2e50f6dcb978 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:59.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-173" for this suite. • [SLOW TEST:6.295 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":449,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:47.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Apr 22 22:04:48.352: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:04:48.364: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:04:50.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, 
loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:04:52.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:04:54.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:04:56.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261888, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:04:59.383: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:04:59.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7039" for this suite. STEP: Destroying namespace "webhook-7039-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.545 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":35,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:54.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:04:54.712: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-ab460284-921a-4d5a-8445-ce69d3bcc40b" in namespace "security-context-test-8525" to be "Succeeded or Failed" Apr 22 22:04:54.714: INFO: Pod "busybox-privileged-false-ab460284-921a-4d5a-8445-ce69d3bcc40b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.995844ms Apr 22 22:04:56.718: INFO: Pod "busybox-privileged-false-ab460284-921a-4d5a-8445-ce69d3bcc40b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005823786s Apr 22 22:04:58.722: INFO: Pod "busybox-privileged-false-ab460284-921a-4d5a-8445-ce69d3bcc40b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010001822s Apr 22 22:05:00.727: INFO: Pod "busybox-privileged-false-ab460284-921a-4d5a-8445-ce69d3bcc40b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014658735s Apr 22 22:05:00.727: INFO: Pod "busybox-privileged-false-ab460284-921a-4d5a-8445-ce69d3bcc40b" satisfied condition "Succeeded or Failed" Apr 22 22:05:00.828: INFO: Got logs for pod "busybox-privileged-false-ab460284-921a-4d5a-8445-ce69d3bcc40b": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:00.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8525" for this suite. • [SLOW TEST:6.157 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":338,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 21:59:15.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0422 21:59:15.855372 28 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:01.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-9208" for this suite. 
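------------------------------
The ForbidConcurrent behavior verified above (exactly one Job scheduled, no new Jobs while it runs) comes from the CronJob's concurrencyPolicy. The warning at the start of the spec also applies: batch/v1beta1 CronJob is deprecated in v1.21+, so a sketch like the following uses batch/v1. Name, namespace, schedule, and the sleep command are illustrative:

package main

import (
    "context"

    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // ForbidConcurrent: skip the next run while a previous Job is still active.
    cj := &batchv1.CronJob{
        ObjectMeta: metav1.ObjectMeta{Name: "forbid-demo"}, // illustrative name
        Spec: batchv1.CronJobSpec{
            Schedule:          "*/1 * * * *",
            ConcurrencyPolicy: batchv1.ForbidConcurrent,
            JobTemplate: batchv1.JobTemplateSpec{
                Spec: batchv1.JobSpec{
                    Template: corev1.PodTemplateSpec{
                        Spec: corev1.PodSpec{
                            RestartPolicy: corev1.RestartPolicyOnFailure,
                            Containers: []corev1.Container{{
                                Name:    "busybox",
                                Image:   "busybox:1.28",
                                Command: []string{"sleep", "300"}, // outlives the schedule interval
                            }},
                        },
                    },
                },
            },
        },
    }
    if _, err := cs.BatchV1().CronJobs("default").Create(context.TODO(), cj, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}

With a 300s sleep and a one-minute schedule, the controller keeps skipping subsequent runs until the active Job finishes, which is the condition the spec's "Ensuring no more jobs are scheduled" step waits on.
------------------------------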
• [SLOW TEST:346.052 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":10,"skipped":132,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:51.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:03.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-670" for this suite. • [SLOW TEST:12.059 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":309,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:00.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Apr 22 22:05:00.893: INFO: The status of Pod pod-update-aa391b58-5b4e-48bf-b4cc-bd686953230f is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:02.898: INFO: The status of Pod pod-update-aa391b58-5b4e-48bf-b4cc-bd686953230f is Pending, 
waiting for it to be Running (with Ready = true) Apr 22 22:05:04.897: INFO: The status of Pod pod-update-aa391b58-5b4e-48bf-b4cc-bd686953230f is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 22 22:05:05.412: INFO: Successfully updated pod "pod-update-aa391b58-5b4e-48bf-b4cc-bd686953230f" STEP: verifying the updated pod is in kubernetes Apr 22 22:05:05.416: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:05.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4206" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:03.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 22:05:03.904: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51758088-0c88-4781-8d5c-c8f08c9947ef" in namespace "downward-api-9860" to be "Succeeded or Failed" Apr 22 22:05:03.907: INFO: Pod "downwardapi-volume-51758088-0c88-4781-8d5c-c8f08c9947ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131757ms Apr 22 22:05:05.910: INFO: Pod "downwardapi-volume-51758088-0c88-4781-8d5c-c8f08c9947ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005050245s Apr 22 22:05:07.914: INFO: Pod "downwardapi-volume-51758088-0c88-4781-8d5c-c8f08c9947ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009107488s Apr 22 22:05:09.918: INFO: Pod "downwardapi-volume-51758088-0c88-4781-8d5c-c8f08c9947ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013671465s STEP: Saw pod success Apr 22 22:05:09.918: INFO: Pod "downwardapi-volume-51758088-0c88-4781-8d5c-c8f08c9947ef" satisfied condition "Succeeded or Failed" Apr 22 22:05:09.921: INFO: Trying to get logs from node node2 pod downwardapi-volume-51758088-0c88-4781-8d5c-c8f08c9947ef container client-container: STEP: delete the pod Apr 22 22:05:09.934: INFO: Waiting for pod downwardapi-volume-51758088-0c88-4781-8d5c-c8f08c9947ef to disappear Apr 22 22:05:09.936: INFO: Pod downwardapi-volume-51758088-0c88-4781-8d5c-c8f08c9947ef no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:09.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9860" for this suite. 
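------------------------------
The downward API volume under test exposes a container's own resource request as a file inside the pod. A minimal sketch of such a pod, with illustrative names; given a 32Mi request and the default divisor of 1, the mounted file would contain the request in bytes (33554432):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-memory-demo"}, // illustrative
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox:1.28",
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceMemory: resource.MustParse("32Mi"),
                    },
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_request",
                            // Divisor omitted: it defaults to 1, so the value is in bytes.
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.memory",
                            },
                        }},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}

------------------------------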
• [SLOW TEST:6.072 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":312,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:09.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Apr 22 22:05:10.008: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Apr 22 22:05:10.011: INFO: starting watch STEP: patching STEP: updating Apr 22 22:05:10.023: INFO: waiting for watch events with expected annotations Apr 22 22:05:10.023: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:10.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-2917" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":16,"skipped":315,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":19,"skipped":335,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:53.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:10.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-963" for this suite. • [SLOW TEST:17.097 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":20,"skipped":335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:01.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:11.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2057" for this suite. 
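------------------------------
The lifecycle STEPs above (patching the ReplicationController, patching its scale, updating status) go through the core/v1 client and the scale subresource. A sketch of the two mutation paths, assuming an existing RC named demo-rc in the default namespace:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    rcs := cs.CoreV1().ReplicationControllers("default")

    // Mutation path 1: a strategic-merge patch, as in "patching ReplicationController".
    patch := []byte(`{"metadata":{"labels":{"rc":"patched"}}}`)
    if _, err := rcs.Patch(context.TODO(), "demo-rc", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }

    // Mutation path 2: the scale subresource, as in "patching ReplicationController scale".
    scale, err := rcs.GetScale(context.TODO(), "demo-rc", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    scale.Spec.Replicas = 2
    if _, err := rcs.UpdateScale(context.TODO(), "demo-rc", scale, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("scaled demo-rc to 2 replicas")
}

------------------------------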
• [SLOW TEST:9.503 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":11,"skipped":154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:59.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 22 22:05:08.093: INFO: Successfully updated pod "adopt-release-sshhg" STEP: Checking that the Job readopts the Pod Apr 22 22:05:08.093: INFO: Waiting up to 15m0s for pod "adopt-release-sshhg" in namespace "job-9" to be "adopted" Apr 22 22:05:08.096: INFO: Pod "adopt-release-sshhg": Phase="Running", Reason="", readiness=true. Elapsed: 2.344054ms Apr 22 22:05:10.101: INFO: Pod "adopt-release-sshhg": Phase="Running", Reason="", readiness=true. Elapsed: 2.00727419s Apr 22 22:05:10.101: INFO: Pod "adopt-release-sshhg" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 22 22:05:10.619: INFO: Successfully updated pod "adopt-release-sshhg" STEP: Checking that the Job releases the Pod Apr 22 22:05:10.619: INFO: Waiting up to 15m0s for pod "adopt-release-sshhg" in namespace "job-9" to be "released" Apr 22 22:05:10.622: INFO: Pod "adopt-release-sshhg": Phase="Running", Reason="", readiness=true. Elapsed: 2.502783ms Apr 22 22:05:12.625: INFO: Pod "adopt-release-sshhg": Phase="Running", Reason="", readiness=true. Elapsed: 2.005128641s Apr 22 22:05:12.625: INFO: Pod "adopt-release-sshhg" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:12.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9" for this suite. 
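------------------------------
Adoption and release in the Job spec above are visible entirely through metadata: an adopted pod carries a controller ownerReference pointing back at the Job, and stripping the matching labels makes the controller drop that reference again. A read-only sketch, with an illustrative job name in the label selector:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // List the Job's pods by the job-name label the Job controller stamps on them.
    pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
        metav1.ListOptions{LabelSelector: "job-name=adopt-release"})
    if err != nil {
        panic(err)
    }
    for i := range pods.Items {
        pod := &pods.Items[i]
        // "Adopted" means a controller ownerReference exists; "released" means it is gone.
        if ref := metav1.GetControllerOf(pod); ref != nil {
            fmt.Printf("%s is owned by %s/%s\n", pod.Name, ref.Kind, ref.Name)
        } else {
            fmt.Printf("%s is orphaned\n", pod.Name)
        }
    }
}

------------------------------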
• [SLOW TEST:13.078 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":36,"skipped":396,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:46.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Apr 22 22:04:46.903: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:13.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3259" for this suite. 
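------------------------------
The served-version checks above work because kube-apiserver republishes its aggregated OpenAPI document whenever a CRD version is added, renamed, or removed. A rough sketch of the same check; the definition-key format (reverse-DNS group, then version, then kind) follows the published-schema convention, but the group and kind below are placeholders, not the ones this spec generated:

package main

import (
    "bytes"
    "context"
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Fetch the aggregated OpenAPI v2 document published by the apiserver.
    raw, err := cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").DoRaw(context.TODO())
    if err != nil {
        panic(err)
    }
    // After a rename, the old version's definition key disappears and the
    // new one appears. Placeholder group "crd.example.com", kind "E2eTestCrd".
    for _, version := range []string{"v2", "v3"} {
        key := []byte("com.example.crd." + version + ".E2eTestCrd")
        fmt.Printf("definition for %s present: %v\n", version, bytes.Contains(raw, key))
    }
}

------------------------------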
• [SLOW TEST:26.995 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":16,"skipped":345,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:12.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 22 22:05:15.731: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:15.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6010" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":416,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:10.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:21.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5685" for this suite. • [SLOW TEST:11.057 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":21,"skipped":371,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:21.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-4fd926ac-35af-4f55-aad5-c767f52d04fb STEP: Creating a pod to test consume secrets Apr 22 22:05:21.457: INFO: Waiting up to 5m0s for pod "pod-secrets-50ab69ee-0f4c-4116-aab5-2a8fefbaf7ba" in namespace "secrets-248" to be "Succeeded or Failed" Apr 22 22:05:21.459: INFO: Pod "pod-secrets-50ab69ee-0f4c-4116-aab5-2a8fefbaf7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 1.987262ms Apr 22 22:05:23.462: INFO: Pod "pod-secrets-50ab69ee-0f4c-4116-aab5-2a8fefbaf7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005387616s Apr 22 22:05:25.467: INFO: Pod "pod-secrets-50ab69ee-0f4c-4116-aab5-2a8fefbaf7ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010542717s STEP: Saw pod success Apr 22 22:05:25.467: INFO: Pod "pod-secrets-50ab69ee-0f4c-4116-aab5-2a8fefbaf7ba" satisfied condition "Succeeded or Failed" Apr 22 22:05:25.470: INFO: Trying to get logs from node node2 pod pod-secrets-50ab69ee-0f4c-4116-aab5-2a8fefbaf7ba container secret-volume-test: STEP: delete the pod Apr 22 22:05:25.484: INFO: Waiting for pod pod-secrets-50ab69ee-0f4c-4116-aab5-2a8fefbaf7ba to disappear Apr 22 22:05:25.486: INFO: Pod pod-secrets-50ab69ee-0f4c-4116-aab5-2a8fefbaf7ba no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:25.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-248" for this suite. 
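[Note] The secrets-248 spec above (summary line follows) mounts a Secret as a volume with an explicit defaultMode and reads it as a non-root user whose access comes via fsGroup. A hedged sketch of that pod shape with the core/v1 Go types; the names and the concrete uid/gid/mode values are illustrative, not taken from the test:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        mode := int32(0440)    // file mode for every key projected into the volume
        uid := int64(1000)     // non-root user
        fsGroup := int64(1001) // group ownership applied to the volume files

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-vol-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser: &uid,
                    FSGroup:   &fsGroup,
                },
                Volumes: []corev1.Volume{{
                    Name: "secret-vol",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName:  "demo-secret", // illustrative name
                            DefaultMode: &mode,
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "reader",
                    Image:        "busybox:1.35",
                    Command:      []string{"sh", "-c", "ls -ln /etc/demo && cat /etc/demo/*"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "secret-vol", MountPath: "/etc/demo"}},
                }},
            },
        }
        fmt.Printf("%s: %d container(s), %d volume(s)\n",
            pod.Name, len(pod.Spec.Containers), len(pod.Spec.Volumes))
    }

With fsGroup set, the kubelet applies that gid to the projected files, so the non-root reader can open them even at a restrictive mode like 0440.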
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":373,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:04:59.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-bw7j STEP: Creating a pod to test atomic-volume-subpath Apr 22 22:04:59.479: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-bw7j" in namespace "subpath-8425" to be "Succeeded or Failed" Apr 22 22:04:59.482: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.402017ms Apr 22 22:05:01.485: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005389791s Apr 22 22:05:03.488: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009047803s Apr 22 22:05:05.492: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013122077s Apr 22 22:05:07.495: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Running", Reason="", readiness=true. Elapsed: 8.016108911s Apr 22 22:05:09.498: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Running", Reason="", readiness=true. Elapsed: 10.019050628s Apr 22 22:05:11.502: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Running", Reason="", readiness=true. Elapsed: 12.022678537s Apr 22 22:05:13.506: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Running", Reason="", readiness=true. Elapsed: 14.026698182s Apr 22 22:05:15.511: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Running", Reason="", readiness=true. Elapsed: 16.031384365s Apr 22 22:05:17.514: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Running", Reason="", readiness=true. Elapsed: 18.034611565s Apr 22 22:05:19.517: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Running", Reason="", readiness=true. Elapsed: 20.038104865s Apr 22 22:05:21.522: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Running", Reason="", readiness=true. Elapsed: 22.042380844s Apr 22 22:05:23.527: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Running", Reason="", readiness=true. Elapsed: 24.047787408s Apr 22 22:05:25.531: INFO: Pod "pod-subpath-test-projected-bw7j": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.052302964s STEP: Saw pod success Apr 22 22:05:25.531: INFO: Pod "pod-subpath-test-projected-bw7j" satisfied condition "Succeeded or Failed" Apr 22 22:05:25.534: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-bw7j container test-container-subpath-projected-bw7j: STEP: delete the pod Apr 22 22:05:25.551: INFO: Waiting for pod pod-subpath-test-projected-bw7j to disappear Apr 22 22:05:25.553: INFO: Pod pod-subpath-test-projected-bw7j no longer exists STEP: Deleting pod pod-subpath-test-projected-bw7j Apr 22 22:05:25.553: INFO: Deleting pod "pod-subpath-test-projected-bw7j" in namespace "subpath-8425" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:25.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8425" for this suite. • [SLOW TEST:26.119 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:13.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Apr 22 22:05:13.923: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:15.926: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:17.926: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Apr 22 22:05:17.940: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:19.946: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:21.944: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Apr 22 22:05:21.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 22 22:05:21.953: INFO: Pod pod-with-prestop-exec-hook still exists Apr 22 22:05:23.953: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 22 22:05:23.958: INFO: Pod pod-with-prestop-exec-hook still exists Apr 22 22:05:25.955: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 22 22:05:25.958: INFO: Pod pod-with-prestop-exec-hook still exists Apr 22 22:05:27.955: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 22 22:05:27.958: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:27.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1747" for this suite. 
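[Note] The container-lifecycle-hook-1747 spec above registers a preStop exec hook and proves it ran by having the hook call back into the pod-handle-http-request pod created at the start of the block. A rough sketch of a pod carrying such a hook; the handler URL is illustrative, and on client-go matching this log's v1.21 the handler type is corev1.Handler (renamed to LifecycleHandler in v1.23):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "busybox:1.35",
                    Command: []string{"sh", "-c", "sleep 3600"},
                    Lifecycle: &corev1.Lifecycle{
                        // On pod deletion the kubelet runs this command inside
                        // the container before delivering SIGTERM; calling out
                        // to a handler pod is how the e2e test observes it.
                        PreStop: &corev1.LifecycleHandler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"sh", "-c",
                                    "wget -qO- http://handler:8080/echo?msg=prestop"},
                            },
                        },
                    },
                }},
            },
        }
        fmt.Println(pod.Spec.Containers[0].Lifecycle.PreStop.Exec.Command)
    }

The repeated "still exists ... no longer exists" polling above is the deletion window in which the hook gets its chance to run.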
• [SLOW TEST:14.089 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":348,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:28.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Apr 22 22:05:28.088: INFO: Waiting up to 5m0s for pod "downward-api-1750cfff-1812-481a-969d-6a11817ddd36" in namespace "downward-api-5847" to be "Succeeded or Failed" Apr 22 22:05:28.091: INFO: Pod "downward-api-1750cfff-1812-481a-969d-6a11817ddd36": Phase="Pending", Reason="", readiness=false. Elapsed: 3.109081ms Apr 22 22:05:30.095: INFO: Pod "downward-api-1750cfff-1812-481a-969d-6a11817ddd36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007190185s Apr 22 22:05:32.099: INFO: Pod "downward-api-1750cfff-1812-481a-969d-6a11817ddd36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010503634s STEP: Saw pod success Apr 22 22:05:32.099: INFO: Pod "downward-api-1750cfff-1812-481a-969d-6a11817ddd36" satisfied condition "Succeeded or Failed" Apr 22 22:05:32.101: INFO: Trying to get logs from node node2 pod downward-api-1750cfff-1812-481a-969d-6a11817ddd36 container dapi-container: STEP: delete the pod Apr 22 22:05:32.114: INFO: Waiting for pod downward-api-1750cfff-1812-481a-969d-6a11817ddd36 to disappear Apr 22 22:05:32.116: INFO: Pod downward-api-1750cfff-1812-481a-969d-6a11817ddd36 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:32.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5847" for this suite. 
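[Note] The downward-api-5847 spec above exposes the container's own CPU/memory requests and limits to itself as environment variables via resourceFieldRef. A minimal sketch of that wiring; the resource quantities are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-env-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox:1.35",
                    Command: []string{"sh", "-c", "env | grep -E 'CPU|MEMORY'"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceCPU:    resource.MustParse("250m"),
                            corev1.ResourceMemory: resource.MustParse("32Mi"),
                        },
                        Limits: corev1.ResourceList{
                            corev1.ResourceCPU:    resource.MustParse("500m"),
                            corev1.ResourceMemory: resource.MustParse("64Mi"),
                        },
                    },
                    Env: []corev1.EnvVar{
                        {Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
                        {Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
                        {Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"}}},
                        {Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"}}},
                    },
                }},
            },
        }
        fmt.Println(len(pod.Spec.Containers[0].Env), "downward API env vars")
    }

ResourceFieldSelector also accepts a Divisor quantity for rescaling (e.g. reporting memory in Mi); omitted here, values come back in base units.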
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":389,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":19,"skipped":466,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:25.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Apr 22 22:05:25.608: INFO: The status of Pod labelsupdateb81a9fb9-8b2f-4d2d-8e27-8cdd5c97312f is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:27.611: INFO: The status of Pod labelsupdateb81a9fb9-8b2f-4d2d-8e27-8cdd5c97312f is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:29.612: INFO: The status of Pod labelsupdateb81a9fb9-8b2f-4d2d-8e27-8cdd5c97312f is Running (Ready = true) Apr 22 22:05:30.129: INFO: Successfully updated pod "labelsupdateb81a9fb9-8b2f-4d2d-8e27-8cdd5c97312f" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:32.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4280" for this suite. 
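[Note] The projected-4280 spec above mounts metadata.labels through a projected downward API volume and then patches the pod's labels; the kubelet rewrites the mounted file without restarting the container, which is what "should update labels on modification" verifies. A sketch of the volume wiring, with illustrative names:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "labels",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                            }},
                        },
                    }},
                },
            },
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "labels-demo",
                Labels: map[string]string{"stage": "before"}, // patching this later rewrites /etc/podinfo/labels
            },
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{vol},
                Containers: []corev1.Container{{
                    Name:         "watcher",
                    Image:        "busybox:1.35",
                    Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        fmt.Println(pod.Name, "mounts", vol.Name)
    }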
• [SLOW TEST:6.584 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":466,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:32.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:05:32.191: INFO: Waiting up to 5m0s for pod "busybox-user-65534-c16d6a0d-0087-494e-b8c2-89f9bf6daaee" in namespace "security-context-test-2550" to be "Succeeded or Failed" Apr 22 22:05:32.193: INFO: Pod "busybox-user-65534-c16d6a0d-0087-494e-b8c2-89f9bf6daaee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084766ms Apr 22 22:05:34.198: INFO: Pod "busybox-user-65534-c16d6a0d-0087-494e-b8c2-89f9bf6daaee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006635168s Apr 22 22:05:36.202: INFO: Pod "busybox-user-65534-c16d6a0d-0087-494e-b8c2-89f9bf6daaee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010788514s Apr 22 22:05:36.202: INFO: Pod "busybox-user-65534-c16d6a0d-0087-494e-b8c2-89f9bf6daaee" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:36.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2550" for this suite. 
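[Note] The security-context-test-2550 spec above needs only a container-level runAsUser; the test then asserts that the process inside the container reports that uid. A sketch with illustrative names:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid := int64(65534) // conventionally "nobody"
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "runasuser-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "busybox:1.35",
                    Command: []string{"sh", "-c", "id -u"}, // should print 65534
                    SecurityContext: &corev1.SecurityContext{
                        RunAsUser: &uid,
                    },
                }},
            },
        }
        fmt.Println(pod.Name, "runs as uid", *pod.Spec.Containers[0].SecurityContext.RunAsUser)
    }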
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":407,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:10.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-4752 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 22 22:05:10.110: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 22 22:05:10.143: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:12.147: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:14.147: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:16.147: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:18.147: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:20.146: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:22.146: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:24.146: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:26.147: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:28.147: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:30.146: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:32.146: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 22 22:05:32.150: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 22 22:05:36.173: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Apr 22 22:05:36.173: INFO: Breadth first check of 10.244.3.234 on host 10.10.190.207... Apr 22 22:05:36.175: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.171:9080/dial?request=hostname&protocol=http&host=10.244.3.234&port=8080&tries=1'] Namespace:pod-network-test-4752 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:05:36.175: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:05:36.453: INFO: Waiting for responses: map[] Apr 22 22:05:36.453: INFO: reached 10.244.3.234 after 0/1 tries Apr 22 22:05:36.453: INFO: Breadth first check of 10.244.4.163 on host 10.10.190.208... 
Apr 22 22:05:36.456: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.171:9080/dial?request=hostname&protocol=http&host=10.244.4.163&port=8080&tries=1'] Namespace:pod-network-test-4752 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:05:36.456: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:05:36.557: INFO: Waiting for responses: map[] Apr 22 22:05:36.557: INFO: reached 10.244.4.163 after 0/1 tries Apr 22 22:05:36.557: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:36.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4752" for this suite. • [SLOW TEST:26.477 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":332,"failed":0} SS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:11.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:05:11.519: INFO: created pod Apr 22 22:05:11.519: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-3700" to be "Succeeded or Failed" Apr 22 22:05:11.521: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173919ms Apr 22 22:05:13.523: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004605794s Apr 22 22:05:15.528: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009123782s STEP: Saw pod success Apr 22 22:05:15.528: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Apr 22 22:05:45.528: INFO: polling logs Apr 22 22:05:45.536: INFO: Pod logs: 2022/04/22 22:05:13 OK: Got token 2022/04/22 22:05:13 validating with in-cluster discovery 2022/04/22 22:05:13 OK: got issuer https://kubernetes.default.svc.cluster.local 2022/04/22 22:05:13 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3700:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1650665711, NotBefore:1650665111, IssuedAt:1650665111, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3700", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"57ea504a-b581-470c-9f5d-79d47e5a3b75"}}} 2022/04/22 22:05:13 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2022/04/22 22:05:13 OK: Validated signature on JWT 2022/04/22 22:05:13 OK: Got valid claims from token! 2022/04/22 22:05:13 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3700:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1650665711, NotBefore:1650665111, IssuedAt:1650665111, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3700", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"57ea504a-b581-470c-9f5d-79d47e5a3b75"}}} Apr 22 22:05:45.536: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:45.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3700" for this suite. 
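[Note] In the svcaccounts-3700 spec above, the oidc-discovery-validator pod fetches the issuer's OIDC discovery document and validates its own projected token; the issuer and the "oidc-discovery-test" audience both appear in the pod logs. The token itself comes from a serviceAccountToken source in a projected volume; a sketch of just that source (the expiry value is illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        expiry := int64(600) // 10 minutes; commonly the minimum the apiserver accepts
        vol := corev1.Volume{
            Name: "sa-token",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Audience:          "oidc-discovery-test", // audience the log's validator checks
                            ExpirationSeconds: &expiry,
                            Path:              "token",
                        },
                    }},
                },
            },
        }
        fmt.Println(vol.Name, "->", vol.Projected.Sources[0].ServiceAccountToken.Audience)
    }

The Expiry/NotBefore/IssuedAt claims printed in the log are exactly what such a bounded, audience-scoped token carries.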
• [SLOW TEST:34.063 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":12,"skipped":189,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:45.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 22:05:45.591: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f290ada3-4588-4a56-a9c7-9836e784c255" in namespace "projected-9127" to be "Succeeded or Failed" Apr 22 22:05:45.596: INFO: Pod "downwardapi-volume-f290ada3-4588-4a56-a9c7-9836e784c255": Phase="Pending", Reason="", readiness=false. Elapsed: 5.002405ms Apr 22 22:05:47.599: INFO: Pod "downwardapi-volume-f290ada3-4588-4a56-a9c7-9836e784c255": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008402147s Apr 22 22:05:49.602: INFO: Pod "downwardapi-volume-f290ada3-4588-4a56-a9c7-9836e784c255": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011628365s STEP: Saw pod success Apr 22 22:05:49.602: INFO: Pod "downwardapi-volume-f290ada3-4588-4a56-a9c7-9836e784c255" satisfied condition "Succeeded or Failed" Apr 22 22:05:49.605: INFO: Trying to get logs from node node2 pod downwardapi-volume-f290ada3-4588-4a56-a9c7-9836e784c255 container client-container: STEP: delete the pod Apr 22 22:05:49.621: INFO: Waiting for pod downwardapi-volume-f290ada3-4588-4a56-a9c7-9836e784c255 to disappear Apr 22 22:05:49.623: INFO: Pod downwardapi-volume-f290ada3-4588-4a56-a9c7-9836e784c255 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:49.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9127" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":192,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:49.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Apr 22 22:05:49.699: INFO: Waiting up to 5m0s for pod "security-context-8c581f52-bd83-4814-8f2b-e080113b4ffc" in namespace "security-context-6246" to be "Succeeded or Failed" Apr 22 22:05:49.703: INFO: Pod "security-context-8c581f52-bd83-4814-8f2b-e080113b4ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.565872ms Apr 22 22:05:51.707: INFO: Pod "security-context-8c581f52-bd83-4814-8f2b-e080113b4ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00768061s Apr 22 22:05:53.712: INFO: Pod "security-context-8c581f52-bd83-4814-8f2b-e080113b4ffc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012255926s STEP: Saw pod success Apr 22 22:05:53.712: INFO: Pod "security-context-8c581f52-bd83-4814-8f2b-e080113b4ffc" satisfied condition "Succeeded or Failed" Apr 22 22:05:53.714: INFO: Trying to get logs from node node1 pod security-context-8c581f52-bd83-4814-8f2b-e080113b4ffc container test-container: STEP: delete the pod Apr 22 22:05:53.730: INFO: Waiting for pod security-context-8c581f52-bd83-4814-8f2b-e080113b4ffc to disappear Apr 22 22:05:53.732: INFO: Pod security-context-8c581f52-bd83-4814-8f2b-e080113b4ffc no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:53.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6246" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":206,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:25.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-9545 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 22 22:05:25.551: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 22 22:05:25.589: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:27.591: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:29.593: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:31.594: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:33.593: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:35.595: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:37.592: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:39.592: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:41.594: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:43.600: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:45.595: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 22:05:47.593: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 22 22:05:47.599: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 22 22:05:51.637: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Apr 22 22:05:51.637: INFO: Going to poll 10.244.3.239 on port 8081 at least 0 times, with a maximum of 34 tries before failing Apr 22 22:05:51.639: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.239 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9545 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:05:51.639: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:05:52.739: INFO: Found all 1 expected endpoints: [netserver-0] Apr 22 22:05:52.739: INFO: Going to poll 10.244.4.169 on port 8081 at least 0 times, with a maximum of 34 tries before failing Apr 22 22:05:52.742: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.169 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9545 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:05:52.742: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:05:53.837: INFO: Found all 1 expected endpoints: [netserver-1] 
[AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:53.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9545" for this suite. • [SLOW TEST:28.318 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":391,"failed":0} [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:53.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:55.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-8808" for this suite. 
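[Note] The endpointslice-8808 spec above checks that creating a Service with a selector produces both Endpoints and EndpointSlices, and that deleting the Service removes them. Slices link back to their Service through the well-known kubernetes.io/service-name label; a client-go sketch of that lookup (service name and namespace are illustrative, and it assumes a reachable cluster via $KUBECONFIG):

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // List the slices the endpointslice controller created for one Service.
        slices, err := cs.DiscoveryV1().EndpointSlices("default").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=my-svc"})
        if err != nil {
            panic(err)
        }
        for _, s := range slices.Items {
            fmt.Println(s.Name, len(s.Endpoints), "endpoints")
        }
    }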
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":24,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:53.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:05:53.791: INFO: The status of Pod busybox-readonly-fs0297f75a-3cdc-42ad-97f8-caae3cb69f07 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:55.794: INFO: The status of Pod busybox-readonly-fs0297f75a-3cdc-42ad-97f8-caae3cb69f07 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:57.795: INFO: The status of Pod busybox-readonly-fs0297f75a-3cdc-42ad-97f8-caae3cb69f07 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:05:57.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5605" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":209,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:56.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 22 22:05:56.055: INFO: Waiting up to 5m0s for pod "pod-e9d7482d-2ecd-4cf0-9488-84071dcd0135" in namespace "emptydir-3215" to be "Succeeded or Failed" Apr 22 22:05:56.057: INFO: Pod "pod-e9d7482d-2ecd-4cf0-9488-84071dcd0135": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254828ms Apr 22 22:05:58.061: INFO: Pod "pod-e9d7482d-2ecd-4cf0-9488-84071dcd0135": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005934885s Apr 22 22:06:00.066: INFO: Pod "pod-e9d7482d-2ecd-4cf0-9488-84071dcd0135": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01161049s STEP: Saw pod success Apr 22 22:06:00.066: INFO: Pod "pod-e9d7482d-2ecd-4cf0-9488-84071dcd0135" satisfied condition "Succeeded or Failed" Apr 22 22:06:00.069: INFO: Trying to get logs from node node2 pod pod-e9d7482d-2ecd-4cf0-9488-84071dcd0135 container test-container: STEP: delete the pod Apr 22 22:06:00.081: INFO: Waiting for pod pod-e9d7482d-2ecd-4cf0-9488-84071dcd0135 to disappear Apr 22 22:06:00.083: INFO: Pod pod-e9d7482d-2ecd-4cf0-9488-84071dcd0135 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:00.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3215" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":462,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:00.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 22 22:06:00.146: INFO: Waiting up to 5m0s for pod "pod-e8ac5803-67a4-4c21-a7a5-ccfe4234f7ba" in namespace "emptydir-5880" to be "Succeeded or Failed" Apr 22 22:06:00.148: INFO: Pod "pod-e8ac5803-67a4-4c21-a7a5-ccfe4234f7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006571ms Apr 22 22:06:02.153: INFO: Pod "pod-e8ac5803-67a4-4c21-a7a5-ccfe4234f7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006384448s Apr 22 22:06:04.156: INFO: Pod "pod-e8ac5803-67a4-4c21-a7a5-ccfe4234f7ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010184538s Apr 22 22:06:06.162: INFO: Pod "pod-e8ac5803-67a4-4c21-a7a5-ccfe4234f7ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015449589s STEP: Saw pod success Apr 22 22:06:06.162: INFO: Pod "pod-e8ac5803-67a4-4c21-a7a5-ccfe4234f7ba" satisfied condition "Succeeded or Failed" Apr 22 22:06:06.164: INFO: Trying to get logs from node node1 pod pod-e8ac5803-67a4-4c21-a7a5-ccfe4234f7ba container test-container: STEP: delete the pod Apr 22 22:06:06.175: INFO: Waiting for pod pod-e8ac5803-67a4-4c21-a7a5-ccfe4234f7ba to disappear Apr 22 22:06:06.178: INFO: Pod pod-e8ac5803-67a4-4c21-a7a5-ccfe4234f7ba no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:06.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5880" for this suite. 
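[Note] The two emptydir specs above differ only in medium: emptydir-3215 ran on tmpfs (medium: Memory) and emptydir-5880 on the node's default disk-backed medium. The root/0777 part of the spec names is exercised by what the test container writes and stats inside the volume, not by a field on EmptyDirVolumeSource, which only carries Medium and SizeLimit. A sketch of the tmpfs variant (names are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "scratch",
                    VolumeSource: corev1.VolumeSource{
                        // Empty Medium ("") means the node's default storage;
                        // StorageMediumMemory backs the volume with tmpfs.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "probe",
                    Image:        "busybox:1.35",
                    Command:      []string{"sh", "-c", "mount | grep /scratch && ls -ld /scratch"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
                }},
            },
        }
        fmt.Println(pod.Name, "uses medium:", pod.Spec.Volumes[0].EmptyDir.Medium)
    }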
• [SLOW TEST:6.071 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":475,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:57.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Apr 22 22:05:57.864: INFO: The status of Pod annotationupdatea59187cf-f19b-40f7-9c11-4fcf7cacbd4f is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:59.869: INFO: The status of Pod annotationupdatea59187cf-f19b-40f7-9c11-4fcf7cacbd4f is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:06:01.868: INFO: The status of Pod annotationupdatea59187cf-f19b-40f7-9c11-4fcf7cacbd4f is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:06:03.869: INFO: The status of Pod annotationupdatea59187cf-f19b-40f7-9c11-4fcf7cacbd4f is Running (Ready = true) Apr 22 22:06:04.390: INFO: Successfully updated pod "annotationupdatea59187cf-f19b-40f7-9c11-4fcf7cacbd4f" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:06.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3643" for this suite. 
• [SLOW TEST:8.581 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":218,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:05.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:05:05.501: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:06.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5184" for this suite. • [SLOW TEST:61.318 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":23,"skipped":383,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:06.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Apr 22 22:06:06.875: INFO: Waiting up to 5m0s for pod "client-containers-12543d31-91ba-42e4-9f71-0d8187142d0d" in namespace "containers-5278" to be "Succeeded or Failed" Apr 22 22:06:06.877: INFO: Pod "client-containers-12543d31-91ba-42e4-9f71-0d8187142d0d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.04096ms Apr 22 22:06:08.881: INFO: Pod "client-containers-12543d31-91ba-42e4-9f71-0d8187142d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00593883s Apr 22 22:06:10.885: INFO: Pod "client-containers-12543d31-91ba-42e4-9f71-0d8187142d0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010206085s STEP: Saw pod success Apr 22 22:06:10.885: INFO: Pod "client-containers-12543d31-91ba-42e4-9f71-0d8187142d0d" satisfied condition "Succeeded or Failed" Apr 22 22:06:10.888: INFO: Trying to get logs from node node2 pod client-containers-12543d31-91ba-42e4-9f71-0d8187142d0d container agnhost-container: STEP: delete the pod Apr 22 22:06:10.905: INFO: Waiting for pod client-containers-12543d31-91ba-42e4-9f71-0d8187142d0d to disappear Apr 22 22:06:10.907: INFO: Pod client-containers-12543d31-91ba-42e4-9f71-0d8187142d0d no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:10.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5278" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":404,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:06.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-2a14f6df-f929-4fcd-ad78-b6d4b8be8c61 STEP: Creating a pod to test consume secrets Apr 22 22:06:06.276: INFO: Waiting up to 5m0s for pod "pod-secrets-c5beb3df-a635-4baf-9277-d91d0d77ad6a" in namespace "secrets-2925" to be "Succeeded or Failed" Apr 22 22:06:06.278: INFO: Pod "pod-secrets-c5beb3df-a635-4baf-9277-d91d0d77ad6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199217ms Apr 22 22:06:08.281: INFO: Pod "pod-secrets-c5beb3df-a635-4baf-9277-d91d0d77ad6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00521034s Apr 22 22:06:10.286: INFO: Pod "pod-secrets-c5beb3df-a635-4baf-9277-d91d0d77ad6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010813752s Apr 22 22:06:12.290: INFO: Pod "pod-secrets-c5beb3df-a635-4baf-9277-d91d0d77ad6a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013971465s STEP: Saw pod success Apr 22 22:06:12.290: INFO: Pod "pod-secrets-c5beb3df-a635-4baf-9277-d91d0d77ad6a" satisfied condition "Succeeded or Failed" Apr 22 22:06:12.293: INFO: Trying to get logs from node node1 pod pod-secrets-c5beb3df-a635-4baf-9277-d91d0d77ad6a container secret-volume-test: STEP: delete the pod Apr 22 22:06:12.308: INFO: Waiting for pod pod-secrets-c5beb3df-a635-4baf-9277-d91d0d77ad6a to disappear Apr 22 22:06:12.309: INFO: Pod pod-secrets-c5beb3df-a635-4baf-9277-d91d0d77ad6a no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:12.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2925" for this suite. • [SLOW TEST:6.097 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":496,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:06.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-9c943835-283f-46e3-95c9-f57d5934eab7 STEP: Creating a pod to test consume secrets Apr 22 22:06:06.494: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d27728d1-afc8-4bfb-b192-d3c66b0d70d9" in namespace "projected-7493" to be "Succeeded or Failed" Apr 22 22:06:06.496: INFO: Pod "pod-projected-secrets-d27728d1-afc8-4bfb-b192-d3c66b0d70d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219247ms Apr 22 22:06:08.499: INFO: Pod "pod-projected-secrets-d27728d1-afc8-4bfb-b192-d3c66b0d70d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005680069s Apr 22 22:06:10.503: INFO: Pod "pod-projected-secrets-d27728d1-afc8-4bfb-b192-d3c66b0d70d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009471292s Apr 22 22:06:12.506: INFO: Pod "pod-projected-secrets-d27728d1-afc8-4bfb-b192-d3c66b0d70d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.012323103s STEP: Saw pod success Apr 22 22:06:12.506: INFO: Pod "pod-projected-secrets-d27728d1-afc8-4bfb-b192-d3c66b0d70d9" satisfied condition "Succeeded or Failed" Apr 22 22:06:12.508: INFO: Trying to get logs from node node1 pod pod-projected-secrets-d27728d1-afc8-4bfb-b192-d3c66b0d70d9 container projected-secret-volume-test: STEP: delete the pod Apr 22 22:06:12.520: INFO: Waiting for pod pod-projected-secrets-d27728d1-afc8-4bfb-b192-d3c66b0d70d9 to disappear Apr 22 22:06:12.522: INFO: Pod pod-projected-secrets-d27728d1-afc8-4bfb-b192-d3c66b0d70d9 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:12.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7493" for this suite. • [SLOW TEST:6.096 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:32.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-91 Apr 22 22:05:32.198: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:05:34.202: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Apr 22 22:05:34.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-91 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Apr 22 22:05:34.463: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Apr 22 22:05:34.463: INFO: stdout: "iptables" Apr 22 22:05:34.463: INFO: proxyMode: iptables Apr 22 22:05:34.472: INFO: Waiting for pod kube-proxy-mode-detector to disappear Apr 22 22:05:34.474: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-91 STEP: creating replication controller affinity-clusterip-timeout in namespace services-91 I0422 22:05:34.486584 36 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-91, replica count: 3 I0422 22:05:37.538041 36 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 
created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:05:40.538757 36 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 22:05:40.544: INFO: Creating new exec pod Apr 22 22:05:45.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-91 exec execpod-affinityfszwx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Apr 22 22:05:45.804: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip-timeout 80\n+ echo hostName\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Apr 22 22:05:45.804: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 22:05:45.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-91 exec execpod-affinityfszwx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.16.241 80' Apr 22 22:05:46.085: INFO: stderr: "+ nc -v -t -w 2 10.233.16.241 80\n+ echo hostName\nConnection to 10.233.16.241 80 port [tcp/http] succeeded!\n" Apr 22 22:05:46.085: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 22:05:46.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-91 exec execpod-affinityfszwx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.16.241:80/ ; done' Apr 22 22:05:46.425: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n" Apr 22 22:05:46.425: INFO: stdout: "\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h\naffinity-clusterip-timeout-cdm4h" Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received 
response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Received response from host: affinity-clusterip-timeout-cdm4h Apr 22 22:05:46.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-91 exec execpod-affinityfszwx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.16.241:80/' Apr 22 22:05:47.020: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n" Apr 22 22:05:47.020: INFO: stdout: "affinity-clusterip-timeout-cdm4h" Apr 22 22:06:07.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-91 exec execpod-affinityfszwx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.16.241:80/' Apr 22 22:06:07.293: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.16.241:80/\n" Apr 22 22:06:07.293: INFO: stdout: "affinity-clusterip-timeout-f26pm" Apr 22 22:06:07.293: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-91, will wait for the garbage collector to delete the pods Apr 22 22:06:07.361: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 3.942985ms Apr 22 22:06:07.462: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 101.041241ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:14.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-91" for this suite. 
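
The stickiness-then-switch seen in this spec, affinity-clusterip-timeout-cdm4h for every probe inside the window and affinity-clusterip-timeout-f26pm after the 20-second pause between the two single curls (22:05:47 to 22:06:07), comes from ClientIP session affinity with a timeout configured on the Service. A minimal Go sketch using the v1.21-era k8s.io/api types; the selector, port numbers, and the 10-second timeout are illustrative assumptions, not values read out of this test:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // Requests from one client IP stick to one endpoint until the
        // affinity entry has been idle for this many seconds (assumed value).
        timeout := int32(10)
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
            Spec: corev1.ServiceSpec{
                // Illustrative selector/port; the e2e test wires these to its
                // replication controller's pods.
                Selector: map[string]string{"name": "affinity-clusterip-timeout"},
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(9376),
                }},
                SessionAffinity: corev1.ServiceAffinityClientIP,
                SessionAffinityConfig: &corev1.SessionAffinityConfig{
                    ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
                },
            },
        }
        fmt.Printf("service %s: affinity=%s timeout=%ds\n",
            svc.Name, svc.Spec.SessionAffinity,
            *svc.Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
    }

With such a Service, the iptables-mode kube-proxy detected at the start of the spec keeps a per-client-IP affinity entry; letting the connection sit idle past the timeout is exactly what allows the final probe in the log to land on a different backend pod.
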
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:42.417 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":21,"skipped":474,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:10.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:06:11.582: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:06:13.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261971, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261971, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261971, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261971, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:06:16.603: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object 
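
In outline, the dummy validating-webhook-configuration created and then deleted in this spec is an object like the Go sketch below (admissionregistration/v1 types; every name, namespace, and path here is a hypothetical stand-in, since the test generates its own). The point of the spec is that the meta-webhooks registered above must not be able to mutate this object or block its deletion:

    package main

    import (
        "fmt"

        admissionregv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        path := "/always-allow" // hypothetical webhook endpoint path
        failOpen := admissionregv1.Ignore
        sideEffects := admissionregv1.SideEffectClassNone
        cfg := &admissionregv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-dummy-validating-cfg"},
            Webhooks: []admissionregv1.ValidatingWebhook{{
                Name:                    "dummy.example.com",
                AdmissionReviewVersions: []string{"v1"},
                SideEffects:             &sideEffects,
                FailurePolicy:           &failOpen,
                ClientConfig: admissionregv1.WebhookClientConfig{
                    Service: &admissionregv1.ServiceReference{
                        // Assumed to point at the e2e-test-webhook service
                        // deployed earlier in this spec.
                        Namespace: "webhook-5136",
                        Name:      "e2e-test-webhook",
                        Path:      &path,
                    },
                },
                Rules: []admissionregv1.RuleWithOperations{{
                    Operations: []admissionregv1.OperationType{admissionregv1.Create},
                    Rule: admissionregv1.Rule{
                        APIGroups:   []string{""},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"configmaps"},
                    },
                }},
            }},
        }
        fmt.Println("would create, then expect deletion to succeed for:", cfg.Name)
    }
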
STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:17.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5136" for this suite. STEP: Destroying namespace "webhook-5136-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.759 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":25,"skipped":413,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:12.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Apr 22 22:06:12.713: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:06:12.725: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:06:14.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261972, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261972, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261972, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261972, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has 
paired with the endpoint Apr 22 22:06:17.744: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:17.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3" for this suite. STEP: Destroying namespace "webhook-3-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.543 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":28,"skipped":501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:14.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Apr 22 22:06:14.643: INFO: Waiting up to 5m0s for pod "downward-api-022260e2-050b-4088-8383-d25777f7d9e8" in namespace "downward-api-3130" to be "Succeeded or Failed" Apr 22 22:06:14.645: INFO: Pod "downward-api-022260e2-050b-4088-8383-d25777f7d9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151208ms Apr 22 22:06:16.648: INFO: Pod "downward-api-022260e2-050b-4088-8383-d25777f7d9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005298732s Apr 22 22:06:18.652: INFO: Pod "downward-api-022260e2-050b-4088-8383-d25777f7d9e8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009079125s STEP: Saw pod success Apr 22 22:06:18.652: INFO: Pod "downward-api-022260e2-050b-4088-8383-d25777f7d9e8" satisfied condition "Succeeded or Failed" Apr 22 22:06:18.654: INFO: Trying to get logs from node node2 pod downward-api-022260e2-050b-4088-8383-d25777f7d9e8 container dapi-container: STEP: delete the pod Apr 22 22:06:18.775: INFO: Waiting for pod downward-api-022260e2-050b-4088-8383-d25777f7d9e8 to disappear Apr 22 22:06:18.777: INFO: Pod downward-api-022260e2-050b-4088-8383-d25777f7d9e8 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:18.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3130" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":490,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:18.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command Apr 22 22:06:18.857: INFO: Waiting up to 5m0s for pod "var-expansion-7d4aa1b7-030e-481d-a707-664aa85a8657" in namespace "var-expansion-8182" to be "Succeeded or Failed" Apr 22 22:06:18.859: INFO: Pod "var-expansion-7d4aa1b7-030e-481d-a707-664aa85a8657": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013329ms Apr 22 22:06:20.862: INFO: Pod "var-expansion-7d4aa1b7-030e-481d-a707-664aa85a8657": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004982455s Apr 22 22:06:22.866: INFO: Pod "var-expansion-7d4aa1b7-030e-481d-a707-664aa85a8657": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008556944s STEP: Saw pod success Apr 22 22:06:22.866: INFO: Pod "var-expansion-7d4aa1b7-030e-481d-a707-664aa85a8657" satisfied condition "Succeeded or Failed" Apr 22 22:06:22.869: INFO: Trying to get logs from node node1 pod var-expansion-7d4aa1b7-030e-481d-a707-664aa85a8657 container dapi-container: STEP: delete the pod Apr 22 22:06:22.880: INFO: Waiting for pod var-expansion-7d4aa1b7-030e-481d-a707-664aa85a8657 to disappear Apr 22 22:06:22.882: INFO: Pod var-expansion-7d4aa1b7-030e-481d-a707-664aa85a8657 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:22.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8182" for this suite. 
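
The substitution exercised by this spec is kubelet-side expansion of $(VAR) references in a container's command, not shell expansion. A minimal sketch of such a pod (image, names, and the message value are illustrative assumptions):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "dapi-container",
                    Image: "busybox:1.29", // illustrative image
                    Env: []corev1.EnvVar{{
                        Name:  "MESSAGE",
                        Value: "hello from the substituted command",
                    }},
                    // $(MESSAGE) is resolved by the kubelet against the
                    // container's env before the process is started.
                    Command: []string{"sh", "-c", "echo $(MESSAGE)"},
                }},
            },
        }
        fmt.Println(pod.Name, pod.Spec.Containers[0].Command)
    }
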
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":513,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:12.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:23.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9705" for this suite. • [SLOW TEST:11.062 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":18,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:17.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:06:17.756: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:23.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5540" for this suite. 
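
The create/delete lifecycle exercised by the custom-resource-definition spec above can be reproduced against apiextensions.k8s.io/v1, where every served version must carry a structural schema. A sketch with hypothetical group and kind names (the e2e test generates random ones):

    package main

    import (
        "fmt"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        preserve := true
        crd := &apiextv1.CustomResourceDefinition{
            // CRD names must be <plural>.<group>.
            ObjectMeta: metav1.ObjectMeta{Name: "noxus.mygroup.example.com"},
            Spec: apiextv1.CustomResourceDefinitionSpec{
                Group: "mygroup.example.com",
                Scope: apiextv1.NamespaceScoped,
                Names: apiextv1.CustomResourceDefinitionNames{
                    Plural:   "noxus",
                    Singular: "noxu",
                    Kind:     "Noxu",
                    ListKind: "NoxuList",
                },
                Versions: []apiextv1.CustomResourceDefinitionVersion{{
                    Name:    "v1",
                    Served:  true,
                    Storage: true,
                    // v1 requires a schema; the loosest structural one.
                    Schema: &apiextv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
                            Type:                   "object",
                            XPreserveUnknownFields: &preserve,
                        },
                    },
                }},
            },
        }
        fmt.Println("create then delete:", crd.Name)
    }
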
• [SLOW TEST:6.055 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:17.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:06:18.355: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:06:20.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261978, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261978, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261978, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261978, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:06:23.375: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 22 22:06:29.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-589 attach --namespace=webhook-589 to-be-attached-pod -i -c=container1' Apr 22 22:06:29.591: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:29.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-589" for this suite. 
STEP: Destroying namespace "webhook-589-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.712 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":29,"skipped":525,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:22.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 22:06:22.935: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a2c6dbc-73c1-409a-bf23-fb70099f5a3a" in namespace "downward-api-1356" to be "Succeeded or Failed" Apr 22 22:06:22.937: INFO: Pod "downwardapi-volume-0a2c6dbc-73c1-409a-bf23-fb70099f5a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.90648ms Apr 22 22:06:24.941: INFO: Pod "downwardapi-volume-0a2c6dbc-73c1-409a-bf23-fb70099f5a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005985492s Apr 22 22:06:26.944: INFO: Pod "downwardapi-volume-0a2c6dbc-73c1-409a-bf23-fb70099f5a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00907793s Apr 22 22:06:28.947: INFO: Pod "downwardapi-volume-0a2c6dbc-73c1-409a-bf23-fb70099f5a3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012589858s STEP: Saw pod success Apr 22 22:06:28.947: INFO: Pod "downwardapi-volume-0a2c6dbc-73c1-409a-bf23-fb70099f5a3a" satisfied condition "Succeeded or Failed" Apr 22 22:06:28.950: INFO: Trying to get logs from node node2 pod downwardapi-volume-0a2c6dbc-73c1-409a-bf23-fb70099f5a3a container client-container: STEP: delete the pod Apr 22 22:06:29.796: INFO: Waiting for pod downwardapi-volume-0a2c6dbc-73c1-409a-bf23-fb70099f5a3a to disappear Apr 22 22:06:29.799: INFO: Pod downwardapi-volume-0a2c6dbc-73c1-409a-bf23-fb70099f5a3a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:29.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1356" for this suite. 
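
The downward API volume spec above mounts pod metadata as files inside the container; "podname only" means a single file populated from metadata.name via a fieldRef. A minimal sketch (image and mount paths are illustrative assumptions):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                // /etc/podinfo/podname will contain the pod's name.
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox:1.29", // illustrative
                    Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
            },
        }
        fmt.Println(pod.Name)
    }
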
• [SLOW TEST:6.903 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":520,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:03:54.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-875 STEP: creating service affinity-nodeport in namespace services-875 STEP: creating replication controller affinity-nodeport in namespace services-875 I0422 22:03:54.617812 24 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-875, replica count: 3 I0422 22:03:57.669734 24 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:04:00.672322 24 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:04:03.672731 24 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 22:04:03.681: INFO: Creating new exec pod Apr 22 22:04:10.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' Apr 22 22:04:11.218: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport 80\n+ echo hostName\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Apr 22 22:04:11.218: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 22:04:11.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.10.52 80' Apr 22 22:04:11.525: INFO: stderr: "+ nc -v -t -w 2 10.233.10.52 80\n+ echo hostName\nConnection to 10.233.10.52 80 port [tcp/http] succeeded!\n" Apr 22 22:04:11.525: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 22:04:11.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo 
hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:04:12.137: INFO: rc: 1 Apr 22 22:04:12.137: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying...
[identical retry blocks elided: the same kubectl exec nc probe was rerun roughly once per second from Apr 22 22:04:13.138 through Apr 22 22:04:55.220, every attempt returning rc: 1 with "nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused"] Retrying...
Apr 22 22:04:56.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:04:56.497: INFO: rc: 1 Apr 22 22:04:56.497: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:57.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:04:57.417: INFO: rc: 1 Apr 22 22:04:57.417: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:58.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:04:58.594: INFO: rc: 1 Apr 22 22:04:58.594: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:04:59.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:04:59.485: INFO: rc: 1 Apr 22 22:04:59.485: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:00.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:00.832: INFO: rc: 1 Apr 22 22:05:00.832: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:01.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:01.385: INFO: rc: 1 Apr 22 22:05:01.385: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:02.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:02.441: INFO: rc: 1 Apr 22 22:05:02.441: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:03.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:03.385: INFO: rc: 1 Apr 22 22:05:03.386: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:04.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:04.405: INFO: rc: 1 Apr 22 22:05:04.405: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:05.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:05.387: INFO: rc: 1 Apr 22 22:05:05.387: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:06.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:06.388: INFO: rc: 1 Apr 22 22:05:06.388: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:07.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:07.372: INFO: rc: 1 Apr 22 22:05:07.372: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:08.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:08.390: INFO: rc: 1 Apr 22 22:05:08.390: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:09.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:09.373: INFO: rc: 1 Apr 22 22:05:09.373: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:10.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:10.379: INFO: rc: 1 Apr 22 22:05:10.379: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:11.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:11.503: INFO: rc: 1 Apr 22 22:05:11.503: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:12.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:12.448: INFO: rc: 1 Apr 22 22:05:12.448: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:13.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:13.381: INFO: rc: 1 Apr 22 22:05:13.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:14.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:14.396: INFO: rc: 1 Apr 22 22:05:14.396: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:15.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:15.390: INFO: rc: 1 Apr 22 22:05:15.390: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:16.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:16.542: INFO: rc: 1 Apr 22 22:05:16.542: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:17.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:17.577: INFO: rc: 1 Apr 22 22:05:17.577: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:18.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:18.368: INFO: rc: 1 Apr 22 22:05:18.368: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:19.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:19.455: INFO: rc: 1 Apr 22 22:05:19.455: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:20.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:20.394: INFO: rc: 1 Apr 22 22:05:20.394: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:21.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:21.418: INFO: rc: 1 Apr 22 22:05:21.418: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:22.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:22.376: INFO: rc: 1 Apr 22 22:05:22.376: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:23.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:23.375: INFO: rc: 1 Apr 22 22:05:23.375: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:24.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:24.376: INFO: rc: 1 Apr 22 22:05:24.376: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:25.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:25.400: INFO: rc: 1 Apr 22 22:05:25.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:26.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:26.555: INFO: rc: 1 Apr 22 22:05:26.555: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:27.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:27.684: INFO: rc: 1 Apr 22 22:05:27.684: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:28.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:28.400: INFO: rc: 1 Apr 22 22:05:28.401: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:29.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:29.447: INFO: rc: 1 Apr 22 22:05:29.447: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:30.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:30.416: INFO: rc: 1 Apr 22 22:05:30.417: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:31.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:31.431: INFO: rc: 1 Apr 22 22:05:31.432: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:32.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:32.383: INFO: rc: 1 Apr 22 22:05:32.383: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:33.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:33.405: INFO: rc: 1 Apr 22 22:05:33.405: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:34.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:34.403: INFO: rc: 1 Apr 22 22:05:34.403: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:35.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:35.525: INFO: rc: 1 Apr 22 22:05:35.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:36.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:36.440: INFO: rc: 1 Apr 22 22:05:36.440: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:37.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:37.678: INFO: rc: 1 Apr 22 22:05:37.678: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:38.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:38.393: INFO: rc: 1 Apr 22 22:05:38.393: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:39.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:39.387: INFO: rc: 1 Apr 22 22:05:39.387: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:40.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:40.376: INFO: rc: 1 Apr 22 22:05:40.376: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:41.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:41.419: INFO: rc: 1 Apr 22 22:05:41.419: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:42.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:42.455: INFO: rc: 1 Apr 22 22:05:42.455: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:43.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:43.393: INFO: rc: 1 Apr 22 22:05:43.393: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:44.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:44.395: INFO: rc: 1 Apr 22 22:05:44.395: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:45.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:45.403: INFO: rc: 1 Apr 22 22:05:45.403: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:46.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:46.385: INFO: rc: 1 Apr 22 22:05:46.385: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:47.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:47.378: INFO: rc: 1 Apr 22 22:05:47.378: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:48.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:48.463: INFO: rc: 1 Apr 22 22:05:48.463: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:49.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:49.401: INFO: rc: 1 Apr 22 22:05:49.401: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:50.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:50.412: INFO: rc: 1 Apr 22 22:05:50.412: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:51.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:51.394: INFO: rc: 1 Apr 22 22:05:51.394: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:52.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:52.452: INFO: rc: 1 Apr 22 22:05:52.452: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:53.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:53.382: INFO: rc: 1 Apr 22 22:05:53.382: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:54.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:54.401: INFO: rc: 1 Apr 22 22:05:54.401: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:55.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:55.409: INFO: rc: 1 Apr 22 22:05:55.409: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:05:56.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:56.388: INFO: rc: 1 Apr 22 22:05:56.388: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:57.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:57.381: INFO: rc: 1 Apr 22 22:05:57.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:58.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:58.382: INFO: rc: 1 Apr 22 22:05:58.382: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:05:59.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195' Apr 22 22:05:59.444: INFO: rc: 1 Apr 22 22:05:59.444: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31195 nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:06:00.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:00.493: INFO: rc: 1
Apr 22 22:06:00.493: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:01.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:01.480: INFO: rc: 1
Apr 22 22:06:01.480: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:02.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:02.429: INFO: rc: 1
Apr 22 22:06:02.429: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:03.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:04.234: INFO: rc: 1
Apr 22 22:06:04.234: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:05.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:05.786: INFO: rc: 1
Apr 22 22:06:05.787: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:06.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:06.379: INFO: rc: 1
Apr 22 22:06:06.379: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:07.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:07.460: INFO: rc: 1
Apr 22 22:06:07.461: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:08.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:08.429: INFO: rc: 1
Apr 22 22:06:08.429: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:09.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:09.392: INFO: rc: 1
Apr 22 22:06:09.392: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:10.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:10.511: INFO: rc: 1
Apr 22 22:06:10.511: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:11.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:11.585: INFO: rc: 1
Apr 22 22:06:11.585: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:12.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:12.512: INFO: rc: 1
Apr 22 22:06:12.512: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:12.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-875 exec execpod-affinityh7b5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31195'
Apr 22 22:06:12.729: INFO: rc: 1
Apr 22 22:06:12.730: INFO: Service reachability failing with error: nc: connect to 10.10.190.207 port 31195 (tcp) failed: Connection refused; command terminated with exit code 1. Retrying...
Apr 22 22:06:12.730: FAIL: Unexpected error:
    <*errors.errorString | 0xc004cb2140>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31195 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31195 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001d15a20, 0x77b33d8, 0xc00534e6e0, 0xc0011c6f00, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2535
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001587b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001587b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001587b00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Apr 22 22:06:12.731: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-875, will wait for the garbage collector to delete the pods
Apr 22 22:06:12.795: INFO: Deleting ReplicationController affinity-nodeport took: 4.537424ms
Apr 22 22:06:12.896: INFO: Terminating ReplicationController affinity-nodeport pods took: 101.049661ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-875".
STEP: Found 28 events.
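The probe the suite was retrying above can be reproduced outside the framework. A minimal sketch, assuming the same kubeconfig and that the exec pod execpod-affinityh7b5h still exists; the 120 s deadline and ~1 s poll interval mirror the 2m0s timeout and per-second attempts in the log (the NODE_IP/NODE_PORT/DEADLINE variables are introduced here for illustration only):

  #!/bin/sh
  # Sketch: re-run the NodePort probe until it succeeds or 2m0s elapses.
  # Names, IP, and port are the ones from this run; nothing else is implied
  # about the framework's internals.
  NODE_IP=10.10.190.207   # node1 InternalIP, per the node info dump below
  NODE_PORT=31195         # NodePort allocated to service affinity-nodeport
  DEADLINE=$(( $(date +%s) + 120 ))

  while [ "$(date +%s)" -lt "$DEADLINE" ]; do
    # kubectl exec propagates the remote command's exit status,
    # so a successful nc connection ends the loop.
    if kubectl --kubeconfig=/root/.kube/config --namespace=services-875 \
         exec execpod-affinityh7b5h -- /bin/sh -x -c \
         "echo hostName | nc -v -t -w 2 $NODE_IP $NODE_PORT"; then
      echo "service reachable"; exit 0
    fi
    sleep 1   # the log shows roughly one attempt per second
  done
  echo "service is not reachable within 2m0s on $NODE_IP:$NODE_PORT" >&2
  exit 1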
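The stack trace points at the session-affinity helper in test/e2e/network/service.go; the connection never succeeded, so affinity itself was never exercised. When the endpoint does answer, the property under test can be checked by hand as well. A sketch, assuming (as this test arranges) that the affinity-nodeport backends are agnhost pods that reply to the string hostName with their pod name:

  # Send ten requests from the same client pod and tally the responders.
  for i in $(seq 1 10); do
    kubectl --kubeconfig=/root/.kube/config --namespace=services-875 \
      exec execpod-affinityh7b5h -- /bin/sh -c \
      "echo hostName | nc -t -w 2 10.10.190.207 31195"
  done | sort | uniq -c
  # One distinct pod name across all replies => affinity held;
  # several names => requests were spread across backends.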
Apr 22 22:06:28.011: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-7ns5q: { } Scheduled: Successfully assigned services-875/affinity-nodeport-7ns5q to node1
Apr 22 22:06:28.011: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-9r2t4: { } Scheduled: Successfully assigned services-875/affinity-nodeport-9r2t4 to node2
Apr 22 22:06:28.011: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-ks7k5: { } Scheduled: Successfully assigned services-875/affinity-nodeport-ks7k5 to node2
Apr 22 22:06:28.011: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityh7b5h: { } Scheduled: Successfully assigned services-875/execpod-affinityh7b5h to node1
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:54 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-9r2t4
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:54 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-7ns5q
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:54 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-ks7k5
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:56 +0000 UTC - event for affinity-nodeport-7ns5q: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:56 +0000 UTC - event for affinity-nodeport-7ns5q: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 317.754392ms
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:56 +0000 UTC - event for affinity-nodeport-7ns5q: {kubelet node1} Created: Created container affinity-nodeport
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:56 +0000 UTC - event for affinity-nodeport-7ns5q: {kubelet node1} Started: Started container affinity-nodeport
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:56 +0000 UTC - event for affinity-nodeport-ks7k5: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:56 +0000 UTC - event for affinity-nodeport-ks7k5: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 379.549857ms
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:57 +0000 UTC - event for affinity-nodeport-9r2t4: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:57 +0000 UTC - event for affinity-nodeport-ks7k5: {kubelet node2} Created: Created container affinity-nodeport
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:57 +0000 UTC - event for affinity-nodeport-ks7k5: {kubelet node2} Started: Started container affinity-nodeport
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:58 +0000 UTC - event for affinity-nodeport-9r2t4: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 373.386643ms
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:03:58 +0000 UTC - event for affinity-nodeport-9r2t4: {kubelet node2} Created: Created container affinity-nodeport
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:04:01 +0000 UTC - event for affinity-nodeport-9r2t4: {kubelet node2} Started: Started container affinity-nodeport
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:04:05 +0000 UTC - event for execpod-affinityh7b5h: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:06:28.011: INFO: At 2022-04-22 22:04:06
+0000 UTC - event for execpod-affinityh7b5h: {kubelet node1} Created: Created container agnhost-container Apr 22 22:06:28.011: INFO: At 2022-04-22 22:04:06 +0000 UTC - event for execpod-affinityh7b5h: {kubelet node1} Started: Started container agnhost-container Apr 22 22:06:28.011: INFO: At 2022-04-22 22:04:06 +0000 UTC - event for execpod-affinityh7b5h: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 276.257569ms Apr 22 22:06:28.011: INFO: At 2022-04-22 22:06:12 +0000 UTC - event for affinity-nodeport: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-875/affinity-nodeport: Operation cannot be fulfilled on endpoints "affinity-nodeport": the object has been modified; please apply your changes to the latest version and try again Apr 22 22:06:28.011: INFO: At 2022-04-22 22:06:12 +0000 UTC - event for affinity-nodeport-7ns5q: {kubelet node1} Killing: Stopping container affinity-nodeport Apr 22 22:06:28.011: INFO: At 2022-04-22 22:06:12 +0000 UTC - event for affinity-nodeport-9r2t4: {kubelet node2} Killing: Stopping container affinity-nodeport Apr 22 22:06:28.011: INFO: At 2022-04-22 22:06:12 +0000 UTC - event for affinity-nodeport-ks7k5: {kubelet node2} Killing: Stopping container affinity-nodeport Apr 22 22:06:28.011: INFO: At 2022-04-22 22:06:12 +0000 UTC - event for execpod-affinityh7b5h: {kubelet node1} Killing: Stopping container agnhost-container Apr 22 22:06:28.013: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 22:06:28.013: INFO: Apr 22 22:06:28.017: INFO: Logging node info for node master1 Apr 22 22:06:28.020: INFO: Node Info: &Node{ObjectMeta:{master1 70710064-7222-41b1-b51e-81deaa6e7014 46922 0 2022-04-22 19:56:45 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:56:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-22 20:04:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:20 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:20 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:20 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:06:20 +0000 UTC,LastTransitionTime:2022-04-22 19:59:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:025a90e4dec046189b065fcf68380be7,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:7e907077-ed98-4d46-8305-29673eaf3bf3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:06:28.020: INFO: Logging kubelet events for node master1 Apr 22 22:06:28.022: INFO: Logging pods the kubelet thinks is on node master1 Apr 22 22:06:28.047: INFO: container-registry-65d7c44b96-7r6xc started at 2022-04-22 20:04:24 +0000 UTC (0+2 container statuses recorded) Apr 22 22:06:28.047: INFO: Container docker-registry ready: true, restart count 0 Apr 22 22:06:28.047: INFO: Container nginx ready: true, restart count 0 Apr 22 22:06:28.048: INFO: node-exporter-b7qpl started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:06:28.048: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:06:28.048: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:06:28.048: INFO: kube-proxy-hfgsd started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.048: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:06:28.048: INFO: kube-flannel-6vhmq started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:06:28.048: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:06:28.048: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:06:28.048: INFO: dns-autoscaler-7df78bfcfb-smkxp started at 2022-04-22 20:00:11 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.048: INFO: Container autoscaler ready: true, restart count 2 Apr 22 22:06:28.048: INFO: kube-scheduler-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.048: INFO: Container kube-scheduler ready: true, restart count 0 Apr 22 22:06:28.048: INFO: kube-apiserver-master1 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.048: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:06:28.048: INFO: kube-controller-manager-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.048: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:06:28.048: INFO: kube-multus-ds-amd64-px448 started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.048: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:06:28.048: INFO: prometheus-operator-585ccfb458-zsrdh started at 2022-04-22 20:13:26 +0000 UTC (0+2 container statuses recorded) Apr 22 22:06:28.048: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:06:28.048: INFO: Container prometheus-operator ready: true, restart count 0 Apr 22 22:06:28.142: INFO: Latency metrics for node master1 Apr 22 22:06:28.142: INFO: Logging node info for node master2 Apr 22 22:06:28.145: INFO: Node Info: &Node{ObjectMeta:{master2 4a346a45-ed0b-49d9-a2ad-b419d2c4705c 47132 0 2022-04-22 19:57:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-04-22 20:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-04-22 20:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:27 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:27 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:27 +0000 UTC,LastTransitionTime:2022-04-22 
19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:06:27 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a68fd05f71b4f40ab5ab92028e707cc,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:45292226-7389-4aa9-8a98-33e443731d14,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:06:28.145: INFO: Logging kubelet events for node master2 Apr 22 22:06:28.148: INFO: Logging pods the kubelet thinks is on node master2 Apr 22 22:06:28.163: INFO: node-feature-discovery-controller-cff799f9f-jfpb6 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.163: INFO: Container nfd-controller ready: true, restart count 0 Apr 22 22:06:28.163: INFO: node-exporter-4tbfp started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:06:28.163: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:06:28.163: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:06:28.163: INFO: kube-apiserver-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.163: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:06:28.163: INFO: kube-controller-manager-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.163: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:06:28.163: INFO: kube-proxy-df6vx started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.163: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:06:28.163: INFO: coredns-8474476ff8-fhb42 started at 2022-04-22 20:00:09 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.163: INFO: Container coredns ready: true, restart count 1 Apr 22 22:06:28.163: INFO: kube-scheduler-master2 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.163: INFO: Container kube-scheduler ready: true, restart count 1 Apr 22 22:06:28.163: INFO: kube-flannel-jlvdn started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:06:28.163: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:06:28.163: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:06:28.163: INFO: kube-multus-ds-amd64-7hw9v started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.163: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:06:28.248: INFO: Latency metrics for node master2 Apr 22 22:06:28.248: INFO: Logging node info for node master3 Apr 22 22:06:28.251: INFO: Node Info: &Node{ObjectMeta:{master3 43c25e47-7b5c-4cf0-863e-39d16b72dcb3 47117 0 2022-04-22 19:57:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] 
map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-04-22 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-04-22 20:11:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:27 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:27 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:27 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:06:27 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e38c1766e8048fab7e120a1bdaf206c,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7266f836-7ba1-4d9b-9691-d8344ab173f1,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:06:28.252: INFO: Logging kubelet events for node master3 Apr 22 22:06:28.254: INFO: Logging pods the kubelet thinks is on node master3 Apr 22 22:06:28.268: INFO: kube-proxy-z9q2t started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.268: INFO: Container kube-proxy ready: true, restart count 1 Apr 22 22:06:28.268: INFO: kube-flannel-6jkw9 started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:06:28.268: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:06:28.268: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 22:06:28.268: INFO: kube-multus-ds-amd64-tlrjm started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.268: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:06:28.268: INFO: coredns-8474476ff8-fdcj7 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.268: INFO: Container coredns ready: true, restart count 1 Apr 22 22:06:28.268: INFO: node-exporter-tnqsz started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:06:28.268: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:06:28.268: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:06:28.268: INFO: kube-apiserver-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.268: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:06:28.268: INFO: kube-controller-manager-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.268: INFO: Container kube-controller-manager ready: true, restart count 3 Apr 22 22:06:28.268: INFO: kube-scheduler-master3 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.268: INFO: Container kube-scheduler ready: true, restart count 2 Apr 22 22:06:28.347: INFO: Latency metrics for node master3 Apr 22 22:06:28.347: INFO: Logging node info for node node1 Apr 22 22:06:28.351: INFO: Node Info: &Node{ObjectMeta:{node1 e0ec3d42-4e2e-47e3-b369-98011b25b39b 46954 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true 
feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:11:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:29 +0000 UTC,LastTransitionTime:2022-04-22 20:02:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:22 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:22 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:22 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:06:22 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cb8bd90647b418e9defe4fbcf1e6b5b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:bd49e3f7-3bce-4d4e-8596-432fc9a7c1c3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:06:28.352: INFO: Logging kubelet events for node node1 Apr 22 22:06:28.355: INFO: Logging pods the kubelet thinks is on node node1 Apr 22 22:06:28.371: INFO: kube-proxy-v8fdh started at 2022-04-22 19:58:37 +0000 UTC (0+1 container 
statuses recorded) Apr 22 22:06:28.371: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:06:28.371: INFO: kube-flannel-l4rjs started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:06:28.371: INFO: Init container install-cni ready: true, restart count 2 Apr 22 22:06:28.371: INFO: Container kube-flannel ready: true, restart count 3 Apr 22 22:06:28.371: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g started at 2022-04-22 20:16:40 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.371: INFO: Container tas-extender ready: true, restart count 0 Apr 22 22:06:28.371: INFO: nodeport-test-7dkll started at 2022-04-22 22:06:23 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.371: INFO: Container nodeport-test ready: true, restart count 0 Apr 22 22:06:28.371: INFO: cmk-init-discover-node1-7s78z started at 2022-04-22 20:11:46 +0000 UTC (0+3 container statuses recorded) Apr 22 22:06:28.371: INFO: Container discover ready: false, restart count 0 Apr 22 22:06:28.371: INFO: Container init ready: false, restart count 0 Apr 22 22:06:28.371: INFO: Container install ready: false, restart count 0 Apr 22 22:06:28.371: INFO: ss-1 started at 2022-04-22 22:06:07 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.371: INFO: Container webserver ready: false, restart count 0 Apr 22 22:06:28.371: INFO: sample-webhook-deployment-78988fc6cd-5pljm started at 2022-04-22 22:06:18 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.371: INFO: Container sample-webhook ready: true, restart count 0 Apr 22 22:06:28.371: INFO: node-feature-discovery-worker-2hkr5 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.371: INFO: Container nfd-worker ready: true, restart count 0 Apr 22 22:06:28.371: INFO: node-exporter-9zzfv started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:06:28.371: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:06:28.371: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:06:28.371: INFO: kube-multus-ds-amd64-x8jqs started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.371: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:06:28.371: INFO: cmk-2vd7z started at 2022-04-22 20:12:29 +0000 UTC (0+2 container statuses recorded) Apr 22 22:06:28.371: INFO: Container nodereport ready: true, restart count 0 Apr 22 22:06:28.371: INFO: Container reconcile ready: true, restart count 0 Apr 22 22:06:28.371: INFO: var-expansion-cc34bfec-1db7-469d-874c-13b970094dde started at 2022-04-22 22:05:15 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.371: INFO: Container dapi-container ready: false, restart count 0 Apr 22 22:06:28.371: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.371: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 22 22:06:28.371: INFO: prometheus-k8s-0 started at 2022-04-22 20:13:52 +0000 UTC (0+4 container statuses recorded) Apr 22 22:06:28.371: INFO: Container config-reloader ready: true, restart count 0 Apr 22 22:06:28.371: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 22 22:06:28.371: INFO: Container grafana ready: true, restart count 0 Apr 22 22:06:28.371: INFO: Container prometheus ready: true, restart count 1 Apr 22 22:06:28.371: INFO: collectd-g2c8k started at 2022-04-22 20:17:31 +0000 UTC (0+3 container 
statuses recorded) Apr 22 22:06:28.371: INFO: Container collectd ready: true, restart count 0 Apr 22 22:06:28.371: INFO: Container collectd-exporter ready: true, restart count 0 Apr 22 22:06:28.371: INFO: Container rbac-proxy ready: true, restart count 0 Apr 22 22:06:28.371: INFO: nginx-proxy-node1 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.371: INFO: Container nginx-proxy ready: true, restart count 2 Apr 22 22:06:28.371: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:28.371: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 22 22:06:28.565: INFO: Latency metrics for node node1 Apr 22 22:06:28.565: INFO: Logging node info for node node2 Apr 22 22:06:28.568: INFO: Node Info: &Node{ObjectMeta:{node2 ef89f5d1-0c69-4be8-a041-8437402ef215 46915 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock 
nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:12:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:12:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:30 +0000 UTC,LastTransitionTime:2022-04-22 20:02:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:19 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:19 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:06:19 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:06:19 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e6f6d1644f942b881dbf2d9722ff85b,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:cc218e06-beff-411d-b91e-f4a272d9c83f,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:06:28.569: INFO: Logging kubelet events for node node2 Apr 22 22:06:28.572: INFO: Logging pods the kubelet thinks is on node node2 Apr 22 22:06:29.665: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.665: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 22 22:06:29.665: INFO: update-demo-nautilus-sf2fb started at 2022-04-22 22:06:24 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.665: INFO: Container update-demo ready: false, restart count 0 Apr 22 22:06:29.665: INFO: ss-2 started at 2022-04-22 22:06:12 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.665: INFO: Container webserver ready: false, restart count 0 Apr 22 22:06:29.665: INFO: kube-multus-ds-amd64-kjrqq started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.665: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:06:29.665: INFO: collectd-ptpbz 
started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded) Apr 22 22:06:29.665: INFO: Container collectd ready: true, restart count 0 Apr 22 22:06:29.665: INFO: Container collectd-exporter ready: true, restart count 0 Apr 22 22:06:29.665: INFO: Container rbac-proxy ready: true, restart count 0 Apr 22 22:06:29.665: INFO: update-demo-nautilus-q9k22 started at 2022-04-22 22:06:24 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.665: INFO: Container update-demo ready: false, restart count 0 Apr 22 22:06:29.665: INFO: kube-flannel-2kskh started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:06:29.665: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:06:29.666: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 22:06:29.666: INFO: cmk-init-discover-node2-2m4dr started at 2022-04-22 20:12:06 +0000 UTC (0+3 container statuses recorded) Apr 22 22:06:29.666: INFO: Container discover ready: false, restart count 0 Apr 22 22:06:29.666: INFO: Container init ready: false, restart count 0 Apr 22 22:06:29.666: INFO: Container install ready: false, restart count 0 Apr 22 22:06:29.666: INFO: node-exporter-c4bhs started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:06:29.666: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:06:29.666: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:06:29.666: INFO: busybox-26df426c-8183-43f6-aa25-d63576f35e7f started at 2022-04-22 22:02:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.666: INFO: Container busybox ready: true, restart count 0 Apr 22 22:06:29.666: INFO: ss-0 started at 2022-04-22 22:05:36 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.666: INFO: Container webserver ready: false, restart count 0 Apr 22 22:06:29.666: INFO: kube-proxy-jvkvz started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.666: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:06:29.666: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.666: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 22 22:06:29.666: INFO: downwardapi-volume-0a2c6dbc-73c1-409a-bf23-fb70099f5a3a started at 2022-04-22 22:06:22 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.666: INFO: Container client-container ready: false, restart count 0 Apr 22 22:06:29.666: INFO: to-be-attached-pod started at 2022-04-22 22:06:23 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.666: INFO: Container container1 ready: true, restart count 0 Apr 22 22:06:29.666: INFO: node-feature-discovery-worker-bktph started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.666: INFO: Container nfd-worker ready: true, restart count 0 Apr 22 22:06:29.666: INFO: cmk-vdkxb started at 2022-04-22 20:12:30 +0000 UTC (0+2 container statuses recorded) Apr 22 22:06:29.666: INFO: Container nodereport ready: true, restart count 0 Apr 22 22:06:29.666: INFO: Container reconcile ready: true, restart count 0 Apr 22 22:06:29.666: INFO: cmk-webhook-6c9d5f8578-nmxns started at 2022-04-22 20:12:30 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.666: INFO: Container cmk-webhook ready: true, restart count 0 Apr 22 22:06:29.666: INFO: busybox-readonly-fs0297f75a-3cdc-42ad-97f8-caae3cb69f07 started at 2022-04-22 22:05:53 +0000 UTC (0+1 container statuses 
recorded) Apr 22 22:06:29.666: INFO: Container busybox-readonly-fs0297f75a-3cdc-42ad-97f8-caae3cb69f07 ready: true, restart count 0 Apr 22 22:06:29.666: INFO: nginx-proxy-node2 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.666: INFO: Container nginx-proxy ready: true, restart count 1 Apr 22 22:06:29.666: INFO: concurrent-27511086-c7775 started at 2022-04-22 22:06:00 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.666: INFO: Container c ready: true, restart count 0 Apr 22 22:06:29.666: INFO: nodeport-test-px4rq started at 2022-04-22 22:06:23 +0000 UTC (0+1 container statuses recorded) Apr 22 22:06:29.666: INFO: Container nodeport-test ready: false, restart count 0 Apr 22 22:06:30.170: INFO: Latency metrics for node node2 Apr 22 22:06:30.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-875" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [155.594 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:06:12.730: Unexpected error: <*errors.errorString | 0xc004cb2140>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31195 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31195 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":571,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:27.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-26df426c-8183-43f6-aa25-d63576f35e7f in namespace container-probe-5401 Apr 22 22:02:33.916: INFO: Started pod busybox-26df426c-8183-43f6-aa25-d63576f35e7f in namespace container-probe-5401 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 22:02:33.919: INFO: Initial restart count of pod busybox-26df426c-8183-43f6-aa25-d63576f35e7f is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 
22:06:34.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5401" for this suite.

• [SLOW TEST:246.675 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":494,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:06:30.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should update/patch PodDisruptionBudget status [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Updating PodDisruptionBudget status
STEP: Waiting for all pods to be running
Apr 22 22:06:32.242: INFO: running pods: 0 < 1
Apr 22 22:06:34.247: INFO: running pods: 0 < 1
STEP: locating a running pod
STEP: Waiting for the pdb to be processed
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:06:36.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7339" for this suite.
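
Editor's note: the DisruptionController spec above drives the PodDisruptionBudget status subresource through both an update and a patch. Below is a minimal client-go sketch of that flow, not the suite's code; the namespace, PDB name, and label selector are hypothetical. The trailing "status" argument to Patch is what routes the request to the status subresource rather than the main resource.

    package main

    import (
        "context"
        "fmt"

        policyv1 "k8s.io/api/policy/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        must(err)
        cs, err := kubernetes.NewForConfig(cfg)
        must(err)
        ctx := context.TODO()
        ns := "disruption-demo" // hypothetical namespace, not the suite's

        // Create a PDB guarding pods labeled app=demo.
        minAvail := intstr.FromInt(1)
        pdb := &policyv1.PodDisruptionBudget{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-pdb", Namespace: ns},
            Spec: policyv1.PodDisruptionBudgetSpec{
                MinAvailable: &minAvail,
                Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
            },
        }
        pdb, err = cs.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{})
        must(err)

        // Update the status subresource (the "Updating PodDisruptionBudget status" step).
        pdb.Status.ObservedGeneration = pdb.Generation
        pdb, err = cs.PolicyV1().PodDisruptionBudgets(ns).UpdateStatus(ctx, pdb, metav1.UpdateOptions{})
        must(err)

        // Patch the status subresource (the "Patching PodDisruptionBudget status" step).
        patch := []byte(`{"status":{"observedGeneration":` + fmt.Sprint(pdb.Generation) + `}}`)
        pdb, err = cs.PolicyV1().PodDisruptionBudgets(ns).Patch(ctx, "demo-pdb",
            types.MergePatchType, patch, metav1.PatchOptions{}, "status")
        must(err)
        fmt.Println("PDB status observedGeneration:", pdb.Status.ObservedGeneration)
    }
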
• [SLOW TEST:6.083 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":31,"skipped":574,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:29.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-7128 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7128 to expose endpoints map[] Apr 22 22:06:29.870: INFO: successfully validated that service multi-endpoint-test in namespace services-7128 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-7128 Apr 22 22:06:29.886: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:06:31.890: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:06:33.890: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7128 to expose endpoints map[pod1:[100]] Apr 22 22:06:33.902: INFO: successfully validated that service multi-endpoint-test in namespace services-7128 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-7128 Apr 22 22:06:33.915: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:06:35.919: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:06:37.919: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7128 to expose endpoints map[pod1:[100] pod2:[101]] Apr 22 22:06:37.931: INFO: successfully validated that service multi-endpoint-test in namespace services-7128 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-7128 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7128 to expose endpoints map[pod2:[101]] Apr 22 22:06:37.946: INFO: successfully validated that service multi-endpoint-test in namespace services-7128 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-7128 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7128 to expose endpoints map[] Apr 22 22:06:37.959: INFO: successfully validated that service multi-endpoint-test in namespace services-7128 exposes 
endpoints map[]
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:06:37.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7128" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:8.139 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":25,"skipped":537,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:06:37.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a test event
STEP: listing all events in all namespaces
STEP: patching the test event
STEP: fetching the test event
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-instrumentation] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:06:38.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1328" for this suite.
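
Editor's note: the Events spec above is plain CRUD against the core events API. A minimal client-go sketch of the same create/list/patch/fetch/delete sequence follows; the namespace, event name, and referenced pod are hypothetical, and the suite's own helpers are not used.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        must(err)
        cs, err := kubernetes.NewForConfig(cfg)
        must(err)
        ctx := context.TODO()
        ns := "default" // hypothetical namespace

        // Create a test event tied to a (hypothetical) pod.
        ev := &corev1.Event{
            ObjectMeta:     metav1.ObjectMeta{Name: "demo-event", Namespace: ns},
            InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "demo-pod"},
            Reason:         "Demo",
            Message:        "original message",
            Type:           corev1.EventTypeNormal,
        }
        _, err = cs.CoreV1().Events(ns).Create(ctx, ev, metav1.CreateOptions{})
        must(err)

        // List events across all namespaces (metav1.NamespaceAll is the empty string).
        all, err := cs.CoreV1().Events(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
        must(err)
        fmt.Println("events visible:", len(all.Items))

        // Patch the event's message, fetch it back, then delete it.
        patch := []byte(`{"message":"patched message"}`)
        _, err = cs.CoreV1().Events(ns).Patch(ctx, "demo-event", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        must(err)
        got, err := cs.CoreV1().Events(ns).Get(ctx, "demo-event", metav1.GetOptions{})
        must(err)
        fmt.Println("fetched:", got.Message)
        must(cs.CoreV1().Events(ns).Delete(ctx, "demo-event", metav1.DeleteOptions{}))
    }
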
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":26,"skipped":544,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:06:34.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Given a Pod with a 'name' label pod-adoption-release is created
Apr 22 22:06:34.608: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Apr 22 22:06:36.611: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Apr 22 22:06:38.612: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Apr 22 22:06:40.612: INFO: The status of Pod pod-adoption-release is Running (Ready = true)
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 22 22:06:41.626: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:06:42.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2825" for this suite.
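
Editor's note: the release step above hinges on the ReplicaSet controller dropping its controller ownerReference once a pod's labels stop matching the selector. A sketch of triggering that by hand follows; the namespace, the replacement label value, and the fixed sleep (the framework polls instead) are assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        ns, pod := "default", "pod-adoption-release" // hypothetical placement of the test pod

        // Change the label the ReplicaSet selector matches on; the controller
        // should then release the pod by removing its controller ownerReference.
        patch := []byte(`{"metadata":{"labels":{"name":"pod-adoption-release-released"}}}`)
        _, err = cs.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }

        time.Sleep(2 * time.Second) // crude wait; the e2e framework polls instead
        p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("controller ref after label change:", metav1.GetControllerOf(p)) // expect <nil>
    }
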
• [SLOW TEST:8.084 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":26,"skipped":497,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:42.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8270.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8270.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8270.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8270.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 22:06:48.739: INFO: DNS probes using dns-8270/dns-test-699b49d6-bfa4-4a9d-bcf0-f72a74c81040 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:48.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8270" for this suite. 
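
Editor's note: the "hostname -i | awk ..." one-liner in the probe scripts above derives a pod's DNS A record from its IP: dots become dashes, qualified by namespace and the "pod" suffix. The same derivation in Go, as a small sketch; the example IP is hypothetical (drawn from the 10.244.0.0/16 flannel range this cluster uses) and the cluster domain is assumed to be cluster.local, as the probe targets show.

    package main

    import (
        "fmt"
        "strings"
    )

    // podARecordName mirrors the awk pipeline in the probe script: a pod's
    // A record is its IP with dots replaced by dashes, qualified by the
    // namespace and the pod.cluster.local suffix.
    func podARecordName(podIP, namespace string) string {
        return strings.ReplaceAll(podIP, ".", "-") + "." + namespace + ".pod.cluster.local"
    }

    func main() {
        fmt.Println(podARecordName("10.244.3.15", "dns-8270"))
        // Output: 10-244-3-15.dns-8270.pod.cluster.local
    }
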
• [SLOW TEST:6.095 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":27,"skipped":505,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:06:48.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if v1 is in available api versions [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: validating api versions
Apr 22 22:06:48.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5956 api-versions'
Apr 22 22:06:48.916: INFO: stderr: ""
Apr 22 22:06:48.916: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\nwebhook.example.com/v1\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:06:48.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5956" for this suite.
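
Editor's note: "kubectl api-versions" is a thin wrapper over the discovery API. A minimal sketch of the equivalent check with client-go's discovery client follows; it prints each advertised group/version and confirms that bare "v1" (the legacy core group, which discovery reports with an empty group name) is among them.

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // ServerGroups returns every API group/version the apiserver advertises,
        // the same data kubectl api-versions prints.
        groups, err := cs.Discovery().ServerGroups()
        if err != nil {
            panic(err)
        }
        found := false
        for _, g := range groups.Groups {
            for _, v := range g.Versions {
                fmt.Println(v.GroupVersion)
                if v.GroupVersion == "v1" {
                    found = true
                }
            }
        }
        fmt.Println("v1 available:", found)
    }
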
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":28,"skipped":512,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:36.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:06:36.767: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:06:38.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261996, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261996, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261996, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786261996, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:06:41.789: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:06:41.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:49.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-375" for this suite. STEP: Destroying namespace "webhook-375-markers" for this suite. 
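
Editor's note: the webhook registered above answers admission reviews for create, update, and delete of the custom resource with a denying response until the offending key is removed. The sketch below shows only the response shape: unlike the suite's webhook it denies unconditionally rather than inspecting the object's data, and the handler path and TLS file names are placeholders.

    package main

    import (
        "encoding/json"
        "io"
        "log"
        "net/http"

        admissionv1 "k8s.io/api/admission/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // deny rejects every AdmissionReview it receives with a message in the
    // style of the e2e test's expected denial.
    func deny(w http.ResponseWriter, r *http.Request) {
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        var review admissionv1.AdmissionReview
        if err := json.Unmarshal(body, &review); err != nil || review.Request == nil {
            http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
            return
        }
        review.Response = &admissionv1.AdmissionResponse{
            UID:     review.Request.UID, // response UID must echo the request UID
            Allowed: false,
            Result:  &metav1.Status{Message: "the custom resource contains unwanted data"},
        }
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(review)
    }

    func main() {
        http.HandleFunc("/custom-resource", deny)
        // Admission webhooks must be served over TLS; cert paths are placeholders.
        log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
    }
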
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.590 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":32,"skipped":587,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:49.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 22:06:49.954: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e67d3a02-5987-4555-9681-7755a1dcc2ba" in namespace "projected-981" to be "Succeeded or Failed" Apr 22 22:06:49.956: INFO: Pod "downwardapi-volume-e67d3a02-5987-4555-9681-7755a1dcc2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134349ms Apr 22 22:06:51.961: INFO: Pod "downwardapi-volume-e67d3a02-5987-4555-9681-7755a1dcc2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006686174s Apr 22 22:06:53.964: INFO: Pod "downwardapi-volume-e67d3a02-5987-4555-9681-7755a1dcc2ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010275894s STEP: Saw pod success Apr 22 22:06:53.964: INFO: Pod "downwardapi-volume-e67d3a02-5987-4555-9681-7755a1dcc2ba" satisfied condition "Succeeded or Failed" Apr 22 22:06:53.966: INFO: Trying to get logs from node node2 pod downwardapi-volume-e67d3a02-5987-4555-9681-7755a1dcc2ba container client-container: STEP: delete the pod Apr 22 22:06:53.995: INFO: Waiting for pod downwardapi-volume-e67d3a02-5987-4555-9681-7755a1dcc2ba to disappear Apr 22 22:06:53.997: INFO: Pod downwardapi-volume-e67d3a02-5987-4555-9681-7755a1dcc2ba no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:53.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-981" for this suite. 
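
Editor's note: the pod in the Projected downwardAPI spec above mounts a projected volume whose file is backed by a resourceFieldRef on limits.cpu. A sketch of an equivalent pod object follows; the pod name, mount path, and command are illustrative, while the agnhost image is taken from the node image lists logged earlier. With the default divisor of 1, a 500m CPU limit is exposed in the file as the rounded-up value 1.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A pod whose projected downwardAPI volume exposes the container's
        // CPU limit as a file the container can read.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "cpu_limit",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.cpu",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Printf("volume %q exposes %s\n", pod.Spec.Volumes[0].Name,
            pod.Spec.Volumes[0].Projected.Sources[0].DownwardAPI.Items[0].ResourceFieldRef.Resource)
    }
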
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":602,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:29.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-spsf STEP: Creating a pod to test atomic-volume-subpath Apr 22 22:06:29.721: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-spsf" in namespace "subpath-7376" to be "Succeeded or Failed" Apr 22 22:06:29.723: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19803ms Apr 22 22:06:31.725: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004706451s Apr 22 22:06:33.730: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Running", Reason="", readiness=true. Elapsed: 4.008971074s Apr 22 22:06:35.734: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Running", Reason="", readiness=true. Elapsed: 6.01376675s Apr 22 22:06:37.738: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Running", Reason="", readiness=true. Elapsed: 8.017593664s Apr 22 22:06:39.742: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Running", Reason="", readiness=true. Elapsed: 10.021287922s Apr 22 22:06:41.746: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Running", Reason="", readiness=true. Elapsed: 12.025185501s Apr 22 22:06:43.749: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Running", Reason="", readiness=true. Elapsed: 14.028234113s Apr 22 22:06:45.753: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Running", Reason="", readiness=true. Elapsed: 16.032168072s Apr 22 22:06:47.758: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Running", Reason="", readiness=true. Elapsed: 18.036921531s Apr 22 22:06:49.763: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Running", Reason="", readiness=true. Elapsed: 20.042387327s Apr 22 22:06:51.768: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Running", Reason="", readiness=true. Elapsed: 22.047230182s Apr 22 22:06:53.772: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Running", Reason="", readiness=true. Elapsed: 24.050969071s Apr 22 22:06:55.776: INFO: Pod "pod-subpath-test-secret-spsf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.05576567s STEP: Saw pod success Apr 22 22:06:55.777: INFO: Pod "pod-subpath-test-secret-spsf" satisfied condition "Succeeded or Failed" Apr 22 22:06:55.780: INFO: Trying to get logs from node node1 pod pod-subpath-test-secret-spsf container test-container-subpath-secret-spsf: STEP: delete the pod Apr 22 22:06:55.795: INFO: Waiting for pod pod-subpath-test-secret-spsf to disappear Apr 22 22:06:55.797: INFO: Pod pod-subpath-test-secret-spsf no longer exists STEP: Deleting pod pod-subpath-test-secret-spsf Apr 22 22:06:55.797: INFO: Deleting pod "pod-subpath-test-secret-spsf" in namespace "subpath-7376" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:55.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7376" for this suite. • [SLOW TEST:26.127 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":552,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:36.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-4585 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4585 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4585 Apr 22 22:05:36.606: INFO: Found 0 stateful pods, waiting for 1 Apr 22 22:05:46.611: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 22 22:05:46.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4585 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:05:47.046: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 22:05:47.046: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:05:47.046: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:05:47.049: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 22 22:05:57.052: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:05:57.052: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:05:57.063: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999487s Apr 22 22:05:58.066: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996926691s Apr 22 22:05:59.070: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.993556611s Apr 22 22:06:00.074: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.990250612s Apr 22 22:06:01.077: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.986507971s Apr 22 22:06:02.080: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.98383074s Apr 22 22:06:03.082: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.980965242s Apr 22 22:06:04.087: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.977463336s Apr 22 22:06:05.092: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.971958339s Apr 22 22:06:06.096: INFO: Verifying statefulset ss doesn't scale past 1 for another 967.190199ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4585 Apr 22 22:06:07.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4585 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:06:07.356: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 22 22:06:07.356: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:06:07.356: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:06:07.360: INFO: Found 1 stateful pods, waiting for 3 Apr 22 22:06:17.363: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 22:06:17.363: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 22:06:17.363: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 22 22:06:17.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4585 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:06:17.655: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 22:06:17.655: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:06:17.655: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:06:17.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4585 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:06:17.901: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 22:06:17.901: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 
22:06:17.901: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:06:17.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4585 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 22:06:18.240: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 22:06:18.240: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 22:06:18.240: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 22:06:18.240: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:06:18.243: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 22 22:06:28.249: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:06:28.249: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:06:28.249: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 22 22:06:28.260: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999499s Apr 22 22:06:29.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996507623s Apr 22 22:06:30.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992234881s Apr 22 22:06:31.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985574902s Apr 22 22:06:32.278: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982094464s Apr 22 22:06:33.282: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978862515s Apr 22 22:06:34.287: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.974180587s Apr 22 22:06:35.290: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.969635781s Apr 22 22:06:36.293: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.966426565s Apr 22 22:06:37.296: INFO: Verifying statefulset ss doesn't scale past 3 for another 963.541612ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4585 Apr 22 22:06:38.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4585 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:06:38.829: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 22 22:06:38.830: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:06:38.830: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:06:38.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4585 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:06:39.077: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 22 22:06:39.077: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:06:39.077: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:06:39.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config
--namespace=statefulset-4585 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 22 22:06:39.320: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 22 22:06:39.320: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 22 22:06:39.320: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 22 22:06:39.320: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Apr 22 22:06:59.334: INFO: Deleting all statefulsets in ns statefulset-4585 Apr 22 22:06:59.336: INFO: Scaling statefulset ss to 0 Apr 22 22:06:59.345: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 22:06:59.347: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:06:59.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4585" for this suite. • [SLOW TEST:82.791 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":18,"skipped":334,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:36.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0422 22:05:36.268613 26 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exist by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:00.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-2048" for this suite.
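------------------------------
In the StatefulSet scaling spec above (namespace statefulset-4585), the repeated mv commands are the whole mechanism: the ss pods run httpd with a readiness probe against the page under /usr/local/apache2/htdocs, so moving index.html aside flips a pod to Ready=false, and under the default OrderedReady pod management the controller will not create or delete further ordinals while any pod is unready. A hedged reproduction against a similar httpd-based StatefulSet (the namespace and the baz=blah,foo=bar selector come from this run and are regenerated each time):

# break ss-0's readiness probe by hiding the page it serves
kubectl -n statefulset-4585 exec ss-0 -- /bin/sh -c 'mv /usr/local/apache2/htdocs/index.html /tmp/'
# scale-up now stalls: ss-1 is not created while ss-0 is unready
kubectl -n statefulset-4585 scale statefulset ss --replicas=3
kubectl -n statefulset-4585 get pods -l baz=blah,foo=bar -w
# restore the page and the rollout resumes in ordinal order (ss-1, then ss-2)
kubectl -n statefulset-4585 exec ss-0 -- /bin/sh -c 'mv /tmp/index.html /usr/local/apache2/htdocs/'
------------------------------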
• [SLOW TEST:84.050 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":20,"skipped":423,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:38.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-dlt9 STEP: Creating a pod to test atomic-volume-subpath Apr 22 22:06:38.163: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dlt9" in namespace "subpath-6430" to be "Succeeded or Failed" Apr 22 22:06:38.165: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.980873ms Apr 22 22:06:40.169: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005584238s Apr 22 22:06:42.173: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Running", Reason="", readiness=true. Elapsed: 4.009430282s Apr 22 22:06:44.177: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Running", Reason="", readiness=true. Elapsed: 6.013437154s Apr 22 22:06:46.181: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Running", Reason="", readiness=true. Elapsed: 8.017251716s Apr 22 22:06:48.184: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Running", Reason="", readiness=true. Elapsed: 10.02048087s Apr 22 22:06:50.190: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Running", Reason="", readiness=true. Elapsed: 12.026819559s Apr 22 22:06:52.194: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Running", Reason="", readiness=true. Elapsed: 14.030262091s Apr 22 22:06:54.198: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Running", Reason="", readiness=true. Elapsed: 16.035093714s Apr 22 22:06:56.201: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Running", Reason="", readiness=true. Elapsed: 18.037601982s Apr 22 22:06:58.205: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Running", Reason="", readiness=true. Elapsed: 20.041674248s Apr 22 22:07:00.210: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Running", Reason="", readiness=true. Elapsed: 22.047058022s Apr 22 22:07:02.214: INFO: Pod "pod-subpath-test-configmap-dlt9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.050571742s STEP: Saw pod success Apr 22 22:07:02.214: INFO: Pod "pod-subpath-test-configmap-dlt9" satisfied condition "Succeeded or Failed" Apr 22 22:07:02.216: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-dlt9 container test-container-subpath-configmap-dlt9: STEP: delete the pod Apr 22 22:07:02.244: INFO: Waiting for pod pod-subpath-test-configmap-dlt9 to disappear Apr 22 22:07:02.246: INFO: Pod pod-subpath-test-configmap-dlt9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-dlt9 Apr 22 22:07:02.246: INFO: Deleting pod "pod-subpath-test-configmap-dlt9" in namespace "subpath-6430" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:02.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6430" for this suite. • [SLOW TEST:24.141 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":27,"skipped":588,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":26,"skipped":433,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:23.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Apr 22 22:06:23.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 create -f -' Apr 22 22:06:24.184: INFO: stderr: "" Apr 22 22:06:24.184: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 22 22:06:24.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:06:24.351: INFO: stderr: "" Apr 22 22:06:24.351: INFO: stdout: "update-demo-nautilus-q9k22 update-demo-nautilus-sf2fb " Apr 22 22:06:24.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-q9k22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:06:24.513: INFO: stderr: "" Apr 22 22:06:24.513: INFO: stdout: "" Apr 22 22:06:24.513: INFO: update-demo-nautilus-q9k22 is created but not running Apr 22 22:06:29.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:06:29.681: INFO: stderr: "" Apr 22 22:06:29.681: INFO: stdout: "update-demo-nautilus-q9k22 update-demo-nautilus-sf2fb " Apr 22 22:06:29.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-q9k22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:06:29.845: INFO: stderr: "" Apr 22 22:06:29.846: INFO: stdout: "" Apr 22 22:06:29.846: INFO: update-demo-nautilus-q9k22 is created but not running Apr 22 22:06:34.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:06:35.037: INFO: stderr: "" Apr 22 22:06:35.037: INFO: stdout: "update-demo-nautilus-q9k22 update-demo-nautilus-sf2fb " Apr 22 22:06:35.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-q9k22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:06:35.199: INFO: stderr: "" Apr 22 22:06:35.199: INFO: stdout: "true" Apr 22 22:06:35.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-q9k22 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 22:06:35.386: INFO: stderr: "" Apr 22 22:06:35.386: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 22 22:06:35.386: INFO: validating pod update-demo-nautilus-q9k22 Apr 22 22:06:35.413: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:06:35.413: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 22:06:35.413: INFO: update-demo-nautilus-q9k22 is verified up and running Apr 22 22:06:35.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-sf2fb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:06:35.596: INFO: stderr: "" Apr 22 22:06:35.596: INFO: stdout: "true" Apr 22 22:06:35.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-sf2fb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 22:06:35.774: INFO: stderr: "" Apr 22 22:06:35.774: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 22 22:06:35.774: INFO: validating pod update-demo-nautilus-sf2fb Apr 22 22:06:35.778: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:06:35.778: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 22:06:35.778: INFO: update-demo-nautilus-sf2fb is verified up and running STEP: scaling down the replication controller Apr 22 22:06:35.788: INFO: scanned /root for discovery docs: Apr 22 22:06:35.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Apr 22 22:06:36.014: INFO: stderr: "" Apr 22 22:06:36.014: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 22 22:06:36.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:06:36.208: INFO: stderr: "" Apr 22 22:06:36.208: INFO: stdout: "update-demo-nautilus-q9k22 update-demo-nautilus-sf2fb " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 22 22:06:41.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:06:41.400: INFO: stderr: "" Apr 22 22:06:41.400: INFO: stdout: "update-demo-nautilus-q9k22 update-demo-nautilus-sf2fb " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 22 22:06:46.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:06:46.615: INFO: stderr: "" Apr 22 22:06:46.615: INFO: stdout: "update-demo-nautilus-q9k22 update-demo-nautilus-sf2fb " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 22 22:06:51.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:06:51.796: INFO: stderr: "" Apr 22 22:06:51.796: INFO: stdout: "update-demo-nautilus-q9k22 " Apr 22 22:06:51.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-q9k22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:06:51.955: INFO: stderr: "" Apr 22 22:06:51.955: INFO: stdout: "true" Apr 22 22:06:51.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-q9k22 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 22:06:52.115: INFO: stderr: "" Apr 22 22:06:52.115: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 22 22:06:52.116: INFO: validating pod update-demo-nautilus-q9k22 Apr 22 22:06:52.119: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:06:52.119: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 22:06:52.119: INFO: update-demo-nautilus-q9k22 is verified up and running STEP: scaling up the replication controller Apr 22 22:06:52.128: INFO: scanned /root for discovery docs: Apr 22 22:06:52.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Apr 22 22:06:52.344: INFO: stderr: "" Apr 22 22:06:52.344: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 22 22:06:52.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:06:52.520: INFO: stderr: "" Apr 22 22:06:52.520: INFO: stdout: "update-demo-nautilus-brktg update-demo-nautilus-q9k22 " Apr 22 22:06:52.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-brktg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:06:52.695: INFO: stderr: "" Apr 22 22:06:52.695: INFO: stdout: "" Apr 22 22:06:52.695: INFO: update-demo-nautilus-brktg is created but not running Apr 22 22:06:57.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:06:57.870: INFO: stderr: "" Apr 22 22:06:57.870: INFO: stdout: "update-demo-nautilus-brktg update-demo-nautilus-q9k22 " Apr 22 22:06:57.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-brktg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:06:58.041: INFO: stderr: "" Apr 22 22:06:58.041: INFO: stdout: "" Apr 22 22:06:58.041: INFO: update-demo-nautilus-brktg is created but not running Apr 22 22:07:03.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 22:07:03.224: INFO: stderr: "" Apr 22 22:07:03.225: INFO: stdout: "update-demo-nautilus-brktg update-demo-nautilus-q9k22 " Apr 22 22:07:03.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-brktg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:07:03.394: INFO: stderr: "" Apr 22 22:07:03.394: INFO: stdout: "true" Apr 22 22:07:03.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-brktg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 22:07:03.552: INFO: stderr: "" Apr 22 22:07:03.552: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 22 22:07:03.552: INFO: validating pod update-demo-nautilus-brktg Apr 22 22:07:03.556: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:07:03.556: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 22:07:03.556: INFO: update-demo-nautilus-brktg is verified up and running Apr 22 22:07:03.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-q9k22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 22:07:03.729: INFO: stderr: "" Apr 22 22:07:03.729: INFO: stdout: "true" Apr 22 22:07:03.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods update-demo-nautilus-q9k22 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 22:07:03.896: INFO: stderr: "" Apr 22 22:07:03.896: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 22 22:07:03.896: INFO: validating pod update-demo-nautilus-q9k22 Apr 22 22:07:03.899: INFO: got data: { "image": "nautilus.jpg" } Apr 22 22:07:03.899: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 22:07:03.899: INFO: update-demo-nautilus-q9k22 is verified up and running STEP: using delete to clean up resources Apr 22 22:07:03.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 delete --grace-period=0 --force -f -' Apr 22 22:07:04.034: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 22:07:04.034: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 22 22:07:04.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get rc,svc -l name=update-demo --no-headers' Apr 22 22:07:04.249: INFO: stderr: "No resources found in kubectl-3591 namespace.\n" Apr 22 22:07:04.249: INFO: stdout: "" Apr 22 22:07:04.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3591 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 22 22:07:04.446: INFO: stderr: "" Apr 22 22:07:04.446: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:04.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3591" for this suite. 
• [SLOW TEST:40.675 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":27,"skipped":433,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:55.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Apr 22 22:06:55.893: INFO: Waiting up to 5m0s for pod "var-expansion-4e23d979-5d2c-4bcc-b257-ae00df54ac77" in namespace "var-expansion-5716" to be "Succeeded or Failed" Apr 22 22:06:55.895: INFO: Pod "var-expansion-4e23d979-5d2c-4bcc-b257-ae00df54ac77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090058ms Apr 22 22:06:57.897: INFO: Pod "var-expansion-4e23d979-5d2c-4bcc-b257-ae00df54ac77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00472748s Apr 22 22:06:59.901: INFO: Pod "var-expansion-4e23d979-5d2c-4bcc-b257-ae00df54ac77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008239711s Apr 22 22:07:01.906: INFO: Pod "var-expansion-4e23d979-5d2c-4bcc-b257-ae00df54ac77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013206851s Apr 22 22:07:03.909: INFO: Pod "var-expansion-4e23d979-5d2c-4bcc-b257-ae00df54ac77": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016021972s Apr 22 22:07:05.913: INFO: Pod "var-expansion-4e23d979-5d2c-4bcc-b257-ae00df54ac77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.020606397s STEP: Saw pod success Apr 22 22:07:05.913: INFO: Pod "var-expansion-4e23d979-5d2c-4bcc-b257-ae00df54ac77" satisfied condition "Succeeded or Failed" Apr 22 22:07:05.916: INFO: Trying to get logs from node node1 pod var-expansion-4e23d979-5d2c-4bcc-b257-ae00df54ac77 container dapi-container: STEP: delete the pod Apr 22 22:07:05.929: INFO: Waiting for pod var-expansion-4e23d979-5d2c-4bcc-b257-ae00df54ac77 to disappear Apr 22 22:07:05.930: INFO: Pod var-expansion-4e23d979-5d2c-4bcc-b257-ae00df54ac77 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:05.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5716" for this suite. 
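------------------------------
The var-expansion spec above exercises kubelet-side substitution: $(VAR) references in a container's command and args are expanded from the container's env before the process starts, with no shell involved. A minimal sketch, all names and values illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-args-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container          # same container name as in the log
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(GREETING) $(TARGET)"]   # expanded by the kubelet, not by sh
    env:
    - name: GREETING
      value: hello
    - name: TARGET
      value: world
EOF
kubectl logs var-expansion-args-demo   # -> hello world
------------------------------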
• [SLOW TEST:10.078 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":582,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:00.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition Apr 22 22:07:00.346: INFO: Waiting up to 5m0s for pod "var-expansion-580c28b8-e836-4d87-8cfb-b1a365d42453" in namespace "var-expansion-2570" to be "Succeeded or Failed" Apr 22 22:07:00.349: INFO: Pod "var-expansion-580c28b8-e836-4d87-8cfb-b1a365d42453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.779633ms Apr 22 22:07:02.354: INFO: Pod "var-expansion-580c28b8-e836-4d87-8cfb-b1a365d42453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00739897s Apr 22 22:07:04.357: INFO: Pod "var-expansion-580c28b8-e836-4d87-8cfb-b1a365d42453": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011267501s Apr 22 22:07:06.362: INFO: Pod "var-expansion-580c28b8-e836-4d87-8cfb-b1a365d42453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015410401s STEP: Saw pod success Apr 22 22:07:06.362: INFO: Pod "var-expansion-580c28b8-e836-4d87-8cfb-b1a365d42453" satisfied condition "Succeeded or Failed" Apr 22 22:07:06.364: INFO: Trying to get logs from node node2 pod var-expansion-580c28b8-e836-4d87-8cfb-b1a365d42453 container dapi-container: STEP: delete the pod Apr 22 22:07:06.377: INFO: Waiting for pod var-expansion-580c28b8-e836-4d87-8cfb-b1a365d42453 to disappear Apr 22 22:07:06.379: INFO: Pod var-expansion-580c28b8-e836-4d87-8cfb-b1a365d42453 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:06.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2570" for this suite. 
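------------------------------
Env composition, the feature the spec above covers, is the same substitution applied inside the env list itself: a value may reference variables defined earlier in the same list via $(NAME). A minimal sketch, names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-composition-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $(COMPOSED)"]
    env:
    - name: FIRST
      value: foo
    - name: SECOND
      value: bar
    - name: COMPOSED
      value: "$(FIRST)-$(SECOND)"   # only variables defined earlier in the list are expanded
EOF
kubectl logs env-composition-demo   # -> foo-bar
------------------------------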
• [SLOW TEST:6.073 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":433,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:05.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Apr 22 22:07:06.003: INFO: Waiting up to 5m0s for pod "client-containers-c69c633d-c831-4675-b901-620d1e7a7ce1" in namespace "containers-9832" to be "Succeeded or Failed" Apr 22 22:07:06.005: INFO: Pod "client-containers-c69c633d-c831-4675-b901-620d1e7a7ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033787ms Apr 22 22:07:08.011: INFO: Pod "client-containers-c69c633d-c831-4675-b901-620d1e7a7ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007935926s Apr 22 22:07:10.015: INFO: Pod "client-containers-c69c633d-c831-4675-b901-620d1e7a7ce1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01243969s STEP: Saw pod success Apr 22 22:07:10.015: INFO: Pod "client-containers-c69c633d-c831-4675-b901-620d1e7a7ce1" satisfied condition "Succeeded or Failed" Apr 22 22:07:10.017: INFO: Trying to get logs from node node2 pod client-containers-c69c633d-c831-4675-b901-620d1e7a7ce1 container agnhost-container: STEP: delete the pod Apr 22 22:07:10.059: INFO: Waiting for pod client-containers-c69c633d-c831-4675-b901-620d1e7a7ce1 to disappear Apr 22 22:07:10.062: INFO: Pod client-containers-c69c633d-c831-4675-b901-620d1e7a7ce1 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:10.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9832" for this suite. 
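------------------------------
"Override the image's default command and arguments" maps onto the two pod-spec fields directly: command replaces the image's ENTRYPOINT and args replaces its CMD, so setting both means nothing baked into the image runs. A minimal sketch, names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container       # same container name as in the log
    image: busybox
    command: ["echo"]                                 # replaces ENTRYPOINT
    args: ["entrypoint", "and", "cmd", "overridden"]  # replaces CMD
EOF
kubectl logs override-demo   # -> entrypoint and cmd overridden
------------------------------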
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":602,"failed":0} SSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":24,"skipped":425,"failed":0} [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:02:11.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0422 22:02:12.015654 34 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a suspended cronjob STEP: Ensuring no jobs are scheduled STEP: Ensuring no job exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:12.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7006" for this suite. • [SLOW TEST:300.046 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":25,"skipped":425,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:06.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 22:07:06.459: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4fb7619-3b51-4537-b6af-dc9eebcb2f5b" in namespace "projected-9030" to be "Succeeded or Failed" Apr 22 22:07:06.462: INFO: Pod "downwardapi-volume-c4fb7619-3b51-4537-b6af-dc9eebcb2f5b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.718378ms Apr 22 22:07:08.466: INFO: Pod "downwardapi-volume-c4fb7619-3b51-4537-b6af-dc9eebcb2f5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007313074s Apr 22 22:07:10.470: INFO: Pod "downwardapi-volume-c4fb7619-3b51-4537-b6af-dc9eebcb2f5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011704378s Apr 22 22:07:12.474: INFO: Pod "downwardapi-volume-c4fb7619-3b51-4537-b6af-dc9eebcb2f5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015947985s STEP: Saw pod success Apr 22 22:07:12.475: INFO: Pod "downwardapi-volume-c4fb7619-3b51-4537-b6af-dc9eebcb2f5b" satisfied condition "Succeeded or Failed" Apr 22 22:07:12.478: INFO: Trying to get logs from node node2 pod downwardapi-volume-c4fb7619-3b51-4537-b6af-dc9eebcb2f5b container client-container: STEP: delete the pod Apr 22 22:07:12.490: INFO: Waiting for pod downwardapi-volume-c4fb7619-3b51-4537-b6af-dc9eebcb2f5b to disappear Apr 22 22:07:12.492: INFO: Pod downwardapi-volume-c4fb7619-3b51-4537-b6af-dc9eebcb2f5b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:12.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9030" for this suite. • [SLOW TEST:6.072 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":457,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:12.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:12.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2796" for this suite. 
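------------------------------
Each STEP in the ConfigMap lifecycle spec above has a one-line kubectl equivalent; the spec is essentially the basic verbs plus delete-by-collection through a label selector. A hedged walk-through with illustrative names, keys, and label:

kubectl create configmap demo-cm --from-literal=key=value          # STEP: creating a ConfigMap
kubectl label configmap demo-cm purpose=lifecycle                  # label it so a selector can match
kubectl get configmap demo-cm -o yaml                              # STEP: fetching the ConfigMap
kubectl patch configmap demo-cm --type=merge -p '{"data":{"key":"patched"}}'   # STEP: patching
kubectl get configmaps --all-namespaces -l purpose=lifecycle       # STEP: listing across namespaces by selector
kubectl delete configmaps -l purpose=lifecycle                     # STEP: deleting by collection
kubectl get configmaps                                             # STEP: listing the now-empty namespace
------------------------------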
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":23,"skipped":471,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:12.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:07:13.194: INFO: Checking APIGroup: apiregistration.k8s.io Apr 22 22:07:13.196: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Apr 22 22:07:13.196: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.196: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Apr 22 22:07:13.196: INFO: Checking APIGroup: apps Apr 22 22:07:13.196: INFO: PreferredVersion.GroupVersion: apps/v1 Apr 22 22:07:13.196: INFO: Versions found [{apps/v1 v1}] Apr 22 22:07:13.196: INFO: apps/v1 matches apps/v1 Apr 22 22:07:13.196: INFO: Checking APIGroup: events.k8s.io Apr 22 22:07:13.197: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Apr 22 22:07:13.197: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.197: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Apr 22 22:07:13.197: INFO: Checking APIGroup: authentication.k8s.io Apr 22 22:07:13.198: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Apr 22 22:07:13.198: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.198: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Apr 22 22:07:13.198: INFO: Checking APIGroup: authorization.k8s.io Apr 22 22:07:13.199: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Apr 22 22:07:13.199: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.199: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Apr 22 22:07:13.199: INFO: Checking APIGroup: autoscaling Apr 22 22:07:13.199: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Apr 22 22:07:13.199: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Apr 22 22:07:13.199: INFO: autoscaling/v1 matches autoscaling/v1 Apr 22 22:07:13.199: INFO: Checking APIGroup: batch Apr 22 22:07:13.200: INFO: PreferredVersion.GroupVersion: batch/v1 Apr 22 22:07:13.200: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Apr 22 22:07:13.200: INFO: batch/v1 matches batch/v1 Apr 22 22:07:13.200: INFO: Checking APIGroup: certificates.k8s.io Apr 22 22:07:13.201: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Apr 22 22:07:13.201: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Apr 22 
22:07:13.201: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Apr 22 22:07:13.201: INFO: Checking APIGroup: networking.k8s.io Apr 22 22:07:13.202: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Apr 22 22:07:13.202: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.202: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Apr 22 22:07:13.202: INFO: Checking APIGroup: extensions Apr 22 22:07:13.203: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Apr 22 22:07:13.203: INFO: Versions found [{extensions/v1beta1 v1beta1}] Apr 22 22:07:13.203: INFO: extensions/v1beta1 matches extensions/v1beta1 Apr 22 22:07:13.203: INFO: Checking APIGroup: policy Apr 22 22:07:13.203: INFO: PreferredVersion.GroupVersion: policy/v1 Apr 22 22:07:13.203: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Apr 22 22:07:13.203: INFO: policy/v1 matches policy/v1 Apr 22 22:07:13.203: INFO: Checking APIGroup: rbac.authorization.k8s.io Apr 22 22:07:13.204: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Apr 22 22:07:13.204: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.204: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Apr 22 22:07:13.204: INFO: Checking APIGroup: storage.k8s.io Apr 22 22:07:13.205: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Apr 22 22:07:13.205: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.205: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Apr 22 22:07:13.205: INFO: Checking APIGroup: admissionregistration.k8s.io Apr 22 22:07:13.206: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Apr 22 22:07:13.206: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.206: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Apr 22 22:07:13.206: INFO: Checking APIGroup: apiextensions.k8s.io Apr 22 22:07:13.207: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Apr 22 22:07:13.207: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.207: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Apr 22 22:07:13.207: INFO: Checking APIGroup: scheduling.k8s.io Apr 22 22:07:13.208: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Apr 22 22:07:13.208: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.208: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Apr 22 22:07:13.208: INFO: Checking APIGroup: coordination.k8s.io Apr 22 22:07:13.208: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Apr 22 22:07:13.208: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.209: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Apr 22 22:07:13.209: INFO: Checking APIGroup: node.k8s.io Apr 22 22:07:13.209: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Apr 22 22:07:13.209: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.209: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Apr 22 22:07:13.209: INFO: Checking APIGroup: discovery.k8s.io Apr 22 22:07:13.210: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Apr 22 22:07:13.210: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 
v1beta1}] Apr 22 22:07:13.210: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Apr 22 22:07:13.210: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Apr 22 22:07:13.211: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Apr 22 22:07:13.211: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.211: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Apr 22 22:07:13.211: INFO: Checking APIGroup: intel.com Apr 22 22:07:13.212: INFO: PreferredVersion.GroupVersion: intel.com/v1 Apr 22 22:07:13.212: INFO: Versions found [{intel.com/v1 v1}] Apr 22 22:07:13.212: INFO: intel.com/v1 matches intel.com/v1 Apr 22 22:07:13.212: INFO: Checking APIGroup: k8s.cni.cncf.io Apr 22 22:07:13.213: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 Apr 22 22:07:13.213: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] Apr 22 22:07:13.213: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 Apr 22 22:07:13.213: INFO: Checking APIGroup: monitoring.coreos.com Apr 22 22:07:13.214: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 Apr 22 22:07:13.214: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] Apr 22 22:07:13.214: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 Apr 22 22:07:13.214: INFO: Checking APIGroup: telemetry.intel.com Apr 22 22:07:13.215: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 Apr 22 22:07:13.215: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] Apr 22 22:07:13.215: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 Apr 22 22:07:13.215: INFO: Checking APIGroup: custom.metrics.k8s.io Apr 22 22:07:13.216: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 Apr 22 22:07:13.216: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] Apr 22 22:07:13.216: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:13.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-7391" for this suite. 
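------------------------------
The Discovery test above walks every registered API group and asserts that the group's advertised preferredVersion is one of the versions the group actually serves. The same invariant can be spot-checked by hand from the aggregated discovery document; a minimal sketch, assuming jq is available:

    # Print each group's name and advertised preferred version.
    kubectl get --raw /apis | jq -r '.groups[] | "\(.name): \(.preferredVersion.groupVersion)"'
    # Exit non-zero unless every group's preferred version appears in its versions list.
    kubectl get --raw /apis | jq -e 'all(.groups[]; .preferredVersion.groupVersion as $p | any(.versions[]; .groupVersion == $p))'
------------------------------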
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":24,"skipped":503,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:10.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4719.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4719.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 22:07:16.160: INFO: DNS probes using dns-4719/dns-test-25fe82e1-a044-4b83-974d-178eb1ccf4f9 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:16.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4719" for this suite. 
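------------------------------
The DNS test above loops dig inside two prober pods (wheezy and jessie), querying cluster DNS for the kubernetes.default A record over both UDP (+notcp) and TCP (+tcp), plus the prober pod's own A record, and writing an OK marker file for each probe that returns an answer. The same lookups can be reproduced from any pod that ships dig; a minimal sketch (the image tag is an assumption, any dig-equipped image works):

    # Throwaway pod with DNS utilities.
    kubectl run dns-probe --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4 --restart=Never -- sleep 3600
    # UDP and TCP lookups, matching the test's probe commands.
    kubectl exec dns-probe -- dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A
    kubectl exec dns-probe -- dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A
    kubectl delete pod dns-probe
------------------------------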
• [SLOW TEST:6.081 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":33,"skipped":614,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:12.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:07:12.139: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e325b48d-c628-4830-9901-2fa4675e62e3", Controller:(*bool)(0xc001392cd2), BlockOwnerDeletion:(*bool)(0xc001392cd3)}} Apr 22 22:07:12.143: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d2581472-0ae0-41bb-84be-76b9f0c73fb5", Controller:(*bool)(0xc001392f82), BlockOwnerDeletion:(*bool)(0xc001392f83)}} Apr 22 22:07:12.151: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"29ed13d1-779a-4f70-ad44-665238ae8033", Controller:(*bool)(0xc0013931da), BlockOwnerDeletion:(*bool)(0xc0013931db)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:17.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4483" for this suite. 
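------------------------------
The garbage-collector test above wires three pods into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, per the OwnerReference dumps) and checks that the collector neither deadlocks nor wrongly deletes objects while the circle exists. OwnerReferences are ordinary metadata, so such a cycle can be inspected directly; a sketch against the (since-deleted) namespace from this run:

    # Print each pod in the cycle together with the single pod that owns it.
    for p in pod1 pod2 pod3; do
      kubectl get pod "$p" -n gc-4483 -o jsonpath='{.metadata.name}{" owned by "}{.metadata.ownerReferences[0].name}{"\n"}'
    done
------------------------------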
• [SLOW TEST:5.087 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":26,"skipped":451,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:13.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 22 22:07:13.269: INFO: Waiting up to 5m0s for pod "pod-25694993-0c04-49d5-ad1f-ad1895da3dc1" in namespace "emptydir-7605" to be "Succeeded or Failed" Apr 22 22:07:13.271: INFO: Pod "pod-25694993-0c04-49d5-ad1f-ad1895da3dc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04277ms Apr 22 22:07:15.276: INFO: Pod "pod-25694993-0c04-49d5-ad1f-ad1895da3dc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006769695s Apr 22 22:07:17.279: INFO: Pod "pod-25694993-0c04-49d5-ad1f-ad1895da3dc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009920573s STEP: Saw pod success Apr 22 22:07:17.279: INFO: Pod "pod-25694993-0c04-49d5-ad1f-ad1895da3dc1" satisfied condition "Succeeded or Failed" Apr 22 22:07:17.281: INFO: Trying to get logs from node node1 pod pod-25694993-0c04-49d5-ad1f-ad1895da3dc1 container test-container: STEP: delete the pod Apr 22 22:07:17.292: INFO: Waiting for pod pod-25694993-0c04-49d5-ad1f-ad1895da3dc1 to disappear Apr 22 22:07:17.294: INFO: Pod pod-25694993-0c04-49d5-ad1f-ad1895da3dc1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:17.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7605" for this suite. 
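------------------------------
The (non-root,0666,default) variant above runs a pod as a non-root UID against a default-medium emptyDir and verifies the permission bits of a file created with mode 0666. A self-contained sketch of an equivalent pod (pod name and UID are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo        # illustrative name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001               # non-root, as in the test variant
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/volume
      volumes:
      - name: scratch
        emptyDir: {}                  # default medium (node storage), not Memory
    EOF
------------------------------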
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":508,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:54.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1245 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1245 STEP: creating replication controller externalsvc in namespace services-1245 I0422 22:06:54.064466 24 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1245, replica count: 2 I0422 22:06:57.115587 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:07:00.115899 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 22 22:07:00.126: INFO: Creating new exec pod Apr 22 22:07:06.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1245 exec execpoddqv4b -- /bin/sh -x -c nslookup clusterip-service.services-1245.svc.cluster.local' Apr 22 22:07:06.414: INFO: stderr: "+ nslookup clusterip-service.services-1245.svc.cluster.local\n" Apr 22 22:07:06.414: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-1245.svc.cluster.local\tcanonical name = externalsvc.services-1245.svc.cluster.local.\nName:\texternalsvc.services-1245.svc.cluster.local\nAddress: 10.233.47.141\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1245, will wait for the garbage collector to delete the pods Apr 22 22:07:06.473: INFO: Deleting ReplicationController externalsvc took: 7.099198ms Apr 22 22:07:06.575: INFO: Terminating ReplicationController externalsvc pods took: 101.170075ms Apr 22 22:07:17.985: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:17.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1245" for this suite. 
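------------------------------
The Services test above changes the type of a live Service from ClusterIP to ExternalName, then proves via nslookup from an exec pod that cluster DNS now answers with a CNAME to the external name. A rough hand-rolled equivalent is a patch that sets the type and externalName and clears the ClusterIP-only fields (a sketch; depending on version, clusterIP and port handling may need adjusting):

    kubectl -n services-1245 patch service clusterip-service --type=merge \
      -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-1245.svc.cluster.local","clusterIP":null}}'
    # Verify the CNAME from inside the cluster (exec pod name taken from the log).
    kubectl -n services-1245 exec execpoddqv4b -- nslookup clusterip-service.services-1245.svc.cluster.local
------------------------------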
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:23.986 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":34,"skipped":605,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:02.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Apr 22 22:07:02.300: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:04.304: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:06.305: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Apr 22 22:07:06.320: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:08.324: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:10.323: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 22 22:07:10.336: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 22 22:07:10.339: INFO: Pod pod-with-poststart-http-hook still exists Apr 22 22:07:12.340: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 22 22:07:12.342: INFO: Pod pod-with-poststart-http-hook still exists Apr 22 22:07:14.340: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 22 22:07:14.343: INFO: Pod pod-with-poststart-http-hook still exists Apr 22 22:07:16.341: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 22 22:07:16.344: INFO: Pod pod-with-poststart-http-hook still exists Apr 22 22:07:18.340: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 22 22:07:18.343: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:18.343: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1146" for this suite. • [SLOW TEST:16.083 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":591,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:48.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4590 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4590 STEP: creating replication controller externalsvc in namespace services-4590 I0422 22:06:48.992848 31 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4590, replica count: 2 I0422 22:06:52.043866 31 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:06:55.045425 31 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 22 22:06:55.059: INFO: Creating new exec pod Apr 22 22:07:05.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4590 exec execpodtnmz8 -- /bin/sh -x -c nslookup nodeport-service.services-4590.svc.cluster.local' Apr 22 22:07:05.356: INFO: stderr: "+ nslookup nodeport-service.services-4590.svc.cluster.local\n" Apr 22 22:07:05.356: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-4590.svc.cluster.local\tcanonical name = externalsvc.services-4590.svc.cluster.local.\nName:\texternalsvc.services-4590.svc.cluster.local\nAddress: 10.233.6.226\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4590, will wait for the garbage collector to delete the pods Apr 22 22:07:05.412: INFO: Deleting ReplicationController externalsvc took: 3.497892ms Apr 22 22:07:05.513: INFO: Terminating 
ReplicationController externalsvc pods took: 100.199678ms Apr 22 22:07:18.522: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:18.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4590" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:29.581 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":29,"skipped":528,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:17.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Apr 22 22:07:17.604: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:07:17.615: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:07:19.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262037, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262037, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262037, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262037, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:07:22.636: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating 
webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:22.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5443" for this suite. STEP: Destroying namespace "webhook-5443-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.387 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":26,"skipped":519,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:04.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:07:04.837: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:07:06.847: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262024, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262024, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262024, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262024, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: 
Verifying the service has paired with the endpoint Apr 22 22:07:09.857: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:22.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1247" for this suite. STEP: Destroying namespace "webhook-1247-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.491 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":28,"skipped":445,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:17.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Apr 22 22:07:17.228: INFO: Waiting up to 5m0s for pod "downward-api-99e98410-2de8-439d-954f-98031ff4204a" in namespace "downward-api-6690" to be "Succeeded or Failed" Apr 22 22:07:17.231: INFO: Pod "downward-api-99e98410-2de8-439d-954f-98031ff4204a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344872ms Apr 22 22:07:19.234: INFO: Pod "downward-api-99e98410-2de8-439d-954f-98031ff4204a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005438512s Apr 22 22:07:21.237: INFO: Pod "downward-api-99e98410-2de8-439d-954f-98031ff4204a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008118251s Apr 22 22:07:23.240: INFO: Pod "downward-api-99e98410-2de8-439d-954f-98031ff4204a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.011778616s STEP: Saw pod success Apr 22 22:07:23.240: INFO: Pod "downward-api-99e98410-2de8-439d-954f-98031ff4204a" satisfied condition "Succeeded or Failed" Apr 22 22:07:23.242: INFO: Trying to get logs from node node2 pod downward-api-99e98410-2de8-439d-954f-98031ff4204a container dapi-container: STEP: delete the pod Apr 22 22:07:23.254: INFO: Waiting for pod downward-api-99e98410-2de8-439d-954f-98031ff4204a to disappear Apr 22 22:07:23.256: INFO: Pod downward-api-99e98410-2de8-439d-954f-98031ff4204a no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:23.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6690" for this suite. • [SLOW TEST:6.072 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":466,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:23.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:07:23.034: INFO: Got root ca configmap in namespace "svcaccounts-459" Apr 22 22:07:23.037: INFO: Deleted root ca configmap in namespace "svcaccounts-459" STEP: waiting for a new root ca configmap created Apr 22 22:07:23.540: INFO: Recreated root ca configmap in namespace "svcaccounts-459" Apr 22 22:07:23.543: INFO: Updated root ca configmap in namespace "svcaccounts-459" STEP: waiting for the root ca configmap reconciled Apr 22 22:07:24.048: INFO: Reconciled root ca configmap in namespace "svcaccounts-459" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:24.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-459" for this suite. 
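------------------------------
The ServiceAccounts test above exercises a small reconciliation loop: since v1.21 the controller manager publishes a kube-root-ca.crt ConfigMap into every namespace, and deletions or edits are reverted, which is exactly the Deleted/Recreated/Updated/Reconciled sequence in the log. The behaviour is easy to observe in any namespace:

    kubectl -n default get configmap kube-root-ca.crt      # present in every namespace
    kubectl -n default delete configmap kube-root-ca.crt   # the reconciler recreates it
    sleep 2
    kubectl -n default get configmap kube-root-ca.crt      # back again
------------------------------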
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":29,"skipped":466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:18.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Apr 22 22:07:24.423: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1879 PodName:pod-sharedvolume-e07aa2df-d781-463e-92d1-3e6fe7618d51 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:07:24.423: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:07:24.721: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:24.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1879" for this suite. • [SLOW TEST:6.342 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":29,"skipped":609,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:24.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 22:07:24.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a0439ae-56b5-49cb-a404-951159438aac" in namespace "projected-5569" to be "Succeeded or Failed" Apr 22 22:07:24.150: INFO: Pod "downwardapi-volume-6a0439ae-56b5-49cb-a404-951159438aac": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.708752ms Apr 22 22:07:26.153: INFO: Pod "downwardapi-volume-6a0439ae-56b5-49cb-a404-951159438aac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007708699s Apr 22 22:07:28.157: INFO: Pod "downwardapi-volume-6a0439ae-56b5-49cb-a404-951159438aac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011167647s STEP: Saw pod success Apr 22 22:07:28.157: INFO: Pod "downwardapi-volume-6a0439ae-56b5-49cb-a404-951159438aac" satisfied condition "Succeeded or Failed" Apr 22 22:07:28.159: INFO: Trying to get logs from node node2 pod downwardapi-volume-6a0439ae-56b5-49cb-a404-951159438aac container client-container: STEP: delete the pod Apr 22 22:07:28.184: INFO: Waiting for pod downwardapi-volume-6a0439ae-56b5-49cb-a404-951159438aac to disappear Apr 22 22:07:28.187: INFO: Pod downwardapi-volume-6a0439ae-56b5-49cb-a404-951159438aac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:28.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5569" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":498,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:18.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 22 22:07:18.581: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2105 3539e454-b365-4ea2-a793-13a0d32d2e73 48635 0 2022-04-22 22:07:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-22 22:07:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:07:18.581: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2105 3539e454-b365-4ea2-a793-13a0d32d2e73 48636 0 2022-04-22 22:07:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-22 22:07:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:07:18.581: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2105 3539e454-b365-4ea2-a793-13a0d32d2e73 48637 0 2022-04-22 22:07:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-22 22:07:18 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 22 22:07:28.601: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2105 3539e454-b365-4ea2-a793-13a0d32d2e73 49001 0 2022-04-22 22:07:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-22 22:07:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:07:28.602: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2105 3539e454-b365-4ea2-a793-13a0d32d2e73 49002 0 2022-04-22 22:07:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-22 22:07:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 22:07:28.602: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2105 3539e454-b365-4ea2-a793-13a0d32d2e73 49003 0 2022-04-22 22:07:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-22 22:07:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:28.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2105" for this suite. 
• [SLOW TEST:10.061 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":30,"skipped":533,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:23.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 22 22:07:23.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab6806d7-84c7-450a-9097-d46d1a5779ce" in namespace "downward-api-9373" to be "Succeeded or Failed" Apr 22 22:07:23.320: INFO: Pod "downwardapi-volume-ab6806d7-84c7-450a-9097-d46d1a5779ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083993ms Apr 22 22:07:25.323: INFO: Pod "downwardapi-volume-ab6806d7-84c7-450a-9097-d46d1a5779ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005517257s Apr 22 22:07:27.327: INFO: Pod "downwardapi-volume-ab6806d7-84c7-450a-9097-d46d1a5779ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009139379s Apr 22 22:07:29.329: INFO: Pod "downwardapi-volume-ab6806d7-84c7-450a-9097-d46d1a5779ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011998221s STEP: Saw pod success Apr 22 22:07:29.330: INFO: Pod "downwardapi-volume-ab6806d7-84c7-450a-9097-d46d1a5779ce" satisfied condition "Succeeded or Failed" Apr 22 22:07:29.332: INFO: Trying to get logs from node node1 pod downwardapi-volume-ab6806d7-84c7-450a-9097-d46d1a5779ce container client-container: STEP: delete the pod Apr 22 22:07:29.345: INFO: Waiting for pod downwardapi-volume-ab6806d7-84c7-450a-9097-d46d1a5779ce to disappear Apr 22 22:07:29.347: INFO: Pod downwardapi-volume-ab6806d7-84c7-450a-9097-d46d1a5779ce no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:29.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9373" for this suite. 
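------------------------------
The assertion above concerns the per-item mode field of a downwardAPI volume, which sets the permission bits on the projected file. A self-contained sketch of a pod exercising it (pod and path names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-mode-demo     # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400                # per-item mode, the field under test
    EOF
------------------------------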
• [SLOW TEST:6.068 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":477,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:16.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:07:16.208: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 22 22:07:24.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5183 --namespace=crd-publish-openapi-5183 create -f -' Apr 22 22:07:25.352: INFO: stderr: "" Apr 22 22:07:25.352: INFO: stdout: "e2e-test-crd-publish-openapi-831-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 22 22:07:25.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5183 --namespace=crd-publish-openapi-5183 delete e2e-test-crd-publish-openapi-831-crds test-cr' Apr 22 22:07:25.531: INFO: stderr: "" Apr 22 22:07:25.531: INFO: stdout: "e2e-test-crd-publish-openapi-831-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 22 22:07:25.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5183 --namespace=crd-publish-openapi-5183 apply -f -' Apr 22 22:07:25.919: INFO: stderr: "" Apr 22 22:07:25.919: INFO: stdout: "e2e-test-crd-publish-openapi-831-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 22 22:07:25.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5183 --namespace=crd-publish-openapi-5183 delete e2e-test-crd-publish-openapi-831-crds test-cr' Apr 22 22:07:26.070: INFO: stderr: "" Apr 22 22:07:26.070: INFO: stdout: "e2e-test-crd-publish-openapi-831-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 22 22:07:26.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5183 explain e2e-test-crd-publish-openapi-831-crds' Apr 22 22:07:26.421: INFO: stderr: "" Apr 22 22:07:26.421: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-831-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:30.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5183" for this suite. • [SLOW TEST:13.973 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":34,"skipped":620,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:24.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint Apr 22 22:07:24.783: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint Apr 22 22:07:26.795: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint Apr 22 22:07:28.803: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:30.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-2423" for this suite. 
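------------------------------
Mirroring applies to Endpoints objects maintained by hand (their Service has no selector): the endpointslice-mirroring controller keeps a matching EndpointSlice in step through create, update, and delete, which is the three-phase wait visible above. Mirrored slices carry the standard service-name label, so they can be listed directly (the service name here is illustrative):

    # List EndpointSlices mirrored for a selectorless Service's custom Endpoints.
    kubectl get endpointslices -n endpointslicemirroring-2423 \
      -l kubernetes.io/service-name=example-custom-endpoints
------------------------------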
• [SLOW TEST:6.065 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":30,"skipped":620,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:18.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating a pod Apr 22 22:07:18.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2576 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 22 22:07:18.203: INFO: stderr: "" Apr 22 22:07:18.203: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. Apr 22 22:07:18.203: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 22 22:07:18.203: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2576" to be "running and ready, or succeeded" Apr 22 22:07:18.206: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120844ms Apr 22 22:07:20.208: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004955782s Apr 22 22:07:22.211: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007532994s Apr 22 22:07:24.214: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.010321894s Apr 22 22:07:24.214: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 22 22:07:24.214: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Apr 22 22:07:24.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2576 logs logs-generator logs-generator' Apr 22 22:07:24.374: INFO: stderr: "" Apr 22 22:07:24.374: INFO: stdout: "I0422 22:07:21.682073 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/64pw 431\nI0422 22:07:21.882869 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/c5g 411\nI0422 22:07:22.083106 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/mhpd 559\nI0422 22:07:22.282484 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/hrg 229\nI0422 22:07:22.482887 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/jkq2 270\nI0422 22:07:22.682189 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/fkjg 418\nI0422 22:07:22.882573 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/8sx 467\nI0422 22:07:23.082943 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/jbrf 304\nI0422 22:07:23.282151 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/nf5 456\nI0422 22:07:23.482478 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/jmq 552\nI0422 22:07:23.682855 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/5m82 591\nI0422 22:07:23.882088 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/8t4q 547\nI0422 22:07:24.082471 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/v4j 396\nI0422 22:07:24.282798 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/klx 219\n" STEP: limiting log lines Apr 22 22:07:24.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2576 logs logs-generator logs-generator --tail=1' Apr 22 22:07:24.537: INFO: stderr: "" Apr 22 22:07:24.537: INFO: stdout: "I0422 22:07:24.483064 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/dpk 395\n" Apr 22 22:07:24.537: INFO: got output "I0422 22:07:24.483064 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/dpk 395\n" STEP: limiting log bytes Apr 22 22:07:24.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2576 logs logs-generator logs-generator --limit-bytes=1' Apr 22 22:07:24.690: INFO: stderr: "" Apr 22 22:07:24.690: INFO: stdout: "I" Apr 22 22:07:24.690: INFO: got output "I" STEP: exposing timestamps Apr 22 22:07:24.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2576 logs logs-generator logs-generator --tail=1 --timestamps' Apr 22 22:07:24.858: INFO: stderr: "" Apr 22 22:07:24.858: INFO: stdout: "2022-04-22T22:07:24.682506342Z I0422 22:07:24.682418 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/wk5 378\n" Apr 22 22:07:24.858: INFO: got output "2022-04-22T22:07:24.682506342Z I0422 22:07:24.682418 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/wk5 378\n" STEP: restricting to a time range Apr 22 22:07:27.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2576 logs logs-generator logs-generator --since=1s' Apr 22 22:07:27.526: INFO: stderr: "" Apr 22 22:07:27.526: INFO: stdout: "I0422 22:07:26.683010 1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/pwb9 419\nI0422 22:07:26.882135 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/kb4 367\nI0422 22:07:27.082537 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/8mc 598\nI0422 
22:07:27.282960 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/42sv 422\nI0422 22:07:27.482144 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/4lh 310\n" Apr 22 22:07:27.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2576 logs logs-generator logs-generator --since=24h' Apr 22 22:07:27.683: INFO: stderr: "" Apr 22 22:07:27.683: INFO: stdout: "I0422 22:07:21.682073 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/64pw 431\nI0422 22:07:21.882869 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/c5g 411\nI0422 22:07:22.083106 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/mhpd 559\nI0422 22:07:22.282484 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/hrg 229\nI0422 22:07:22.482887 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/jkq2 270\nI0422 22:07:22.682189 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/fkjg 418\nI0422 22:07:22.882573 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/8sx 467\nI0422 22:07:23.082943 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/jbrf 304\nI0422 22:07:23.282151 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/nf5 456\nI0422 22:07:23.482478 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/jmq 552\nI0422 22:07:23.682855 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/5m82 591\nI0422 22:07:23.882088 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/8t4q 547\nI0422 22:07:24.082471 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/v4j 396\nI0422 22:07:24.282798 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/klx 219\nI0422 22:07:24.483064 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/dpk 395\nI0422 22:07:24.682418 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/wk5 378\nI0422 22:07:24.882744 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/vhh 304\nI0422 22:07:25.100307 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/rp7d 407\nI0422 22:07:25.282676 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/l74 423\nI0422 22:07:25.487987 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/zjb 448\nI0422 22:07:25.682200 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/fr5k 440\nI0422 22:07:25.882567 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/g5dr 460\nI0422 22:07:26.082920 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/png 290\nI0422 22:07:26.282243 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/jjz 224\nI0422 22:07:26.482681 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/67z 259\nI0422 22:07:26.683010 1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/pwb9 419\nI0422 22:07:26.882135 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/kb4 367\nI0422 22:07:27.082537 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/8mc 598\nI0422 22:07:27.282960 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/42sv 422\nI0422 22:07:27.482144 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/4lh 310\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Apr 22 22:07:27.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2576 delete pod logs-generator' Apr 22 22:07:33.620: INFO: 
stderr: "" Apr 22 22:07:33.620: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:33.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2576" for this suite. • [SLOW TEST:15.586 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":35,"skipped":627,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:28.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Apr 22 22:07:28.259: INFO: The status of Pod pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:30.261: INFO: The status of Pod pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:32.262: INFO: The status of Pod pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:34.262: INFO: The status of Pod pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:36.262: INFO: The status of Pod pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 22 22:07:36.778: INFO: Successfully updated pod "pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0" Apr 22 22:07:36.778: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0" in namespace "pods-4158" to be "terminated due to deadline exceeded" Apr 22 22:07:36.780: INFO: Pod "pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0": Phase="Running", Reason="", readiness=true. Elapsed: 2.097708ms Apr 22 22:07:38.784: INFO: Pod "pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00550111s Apr 22 22:07:40.787: INFO: Pod "pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0": Phase="Running", Reason="", readiness=true. Elapsed: 4.009329543s Apr 22 22:07:42.791: INFO: Pod "pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.013373909s Apr 22 22:07:42.791: INFO: Pod "pod-update-activedeadlineseconds-facb9bc6-913a-4a35-9a11-3ae20178a8d0" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:42.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4158" for this suite. • [SLOW TEST:14.595 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:33.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Apr 22 22:07:33.657: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:42.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-653" for this suite. 
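------------------------------
Note: the init-container case above passes because, with restartPolicy Never, a failing init container is terminal: the app containers never start and the pod phase goes straight to Failed. Below is a minimal client-go sketch of such a pod, not the test's actual fixture; the pod and container names are illustrative, and the kubeconfig path matches the one used throughout this run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // no retries: a failed init container fails the pod
			InitContainers: []corev1.Container{{
				Name:    "init-fails",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"/bin/false"}, // exits non-zero
			}},
			Containers: []corev1.Container{{
				Name:    "app", // never started, because the init container failed
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name, "- expect phase Failed once the init container exits")
}
------------------------------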
• [SLOW TEST:9.180 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":36,"skipped":631,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":505,"failed":0} [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:42.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:42.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-78" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":32,"skipped":505,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:59.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 22 22:06:59.404: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 22 22:07:18.185: INFO: >>> kubeConfig: /root/.kube/config Apr 22 22:07:26.855: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:46.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8621" for this suite. 
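------------------------------
Note: the CustomResourcePublishOpenAPI case above registers custom resources whose versions share one API group and then checks that every served version is published in the apiserver's OpenAPI document. A hedged sketch of a two-version CRD built with the apiextensions v1 client follows; the group, kind, and (minimal structural) schema are invented for illustration and are not the test's fixtures.

package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// version builds one served version; exactly one version may set Storage.
func version(name string, storage bool) apiextensionsv1.CustomResourceDefinitionVersion {
	return apiextensionsv1.CustomResourceDefinitionVersion{
		Name:    name,
		Served:  true,
		Storage: storage,
		Schema: &apiextensionsv1.CustomResourceValidation{
			OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := clientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	crd := &apiextensionsv1.CustomResourceDefinition{
		// CRD object name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.demo.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "demo.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			// Two served versions in one group; both show up in OpenAPI.
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				version("v1", true),
				version("v2", false),
			},
		},
	}
	if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------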
• [SLOW TEST:46.767 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":19,"skipped":341,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:28.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Apr 22 22:07:28.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2799 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Apr 22 22:07:28.817: INFO: stderr: "" Apr 22 22:07:28.817: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 22 22:07:33.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2799 get pod e2e-test-httpd-pod -o json' Apr 22 22:07:34.042: INFO: stderr: "" Apr 22 22:07:34.042: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.19\\\"\\n ],\\n \\\"mac\\\": \\\"26:46:69:b4:8f:c3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.19\\\"\\n ],\\n \\\"mac\\\": \\\"26:46:69:b4:8f:c3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2022-04-22T22:07:28Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2799\",\n \"resourceVersion\": \"49173\",\n \"uid\": \"18fa5a26-3891-4d99-a6b6-cd763e2686e5\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n 
{\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-5xt5d\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node1\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-5xt5d\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-22T22:07:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-22T22:07:32Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-22T22:07:32Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-22T22:07:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://504b4aabcf960c7d45cdf731717ddc6b11f393149f25d0ba69b9e425e5367a74\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-04-22T22:07:31Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.207\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.3.19\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.3.19\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-04-22T22:07:28Z\"\n }\n}\n" STEP: replace the image in the pod Apr 22 22:07:34.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2799 replace -f -' Apr 22 22:07:34.453: INFO: stderr: "" Apr 22 22:07:34.453: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Apr 22 22:07:34.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2799 delete pods e2e-test-httpd-pod' Apr 22 22:07:47.831: INFO: stderr: "" Apr 22 22:07:47.831: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] 
Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:47.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2799" for this suite. • [SLOW TEST:19.213 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":31,"skipped":540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:29.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating all guestbook components Apr 22 22:07:29.407: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Apr 22 22:07:29.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 create -f -' Apr 22 22:07:29.771: INFO: stderr: "" Apr 22 22:07:29.771: INFO: stdout: "service/agnhost-replica created\n" Apr 22 22:07:29.771: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Apr 22 22:07:29.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 create -f -' Apr 22 22:07:30.112: INFO: stderr: "" Apr 22 22:07:30.112: INFO: stdout: "service/agnhost-primary created\n" Apr 22 22:07:30.112: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 22 22:07:30.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 create -f -' Apr 22 22:07:30.452: INFO: stderr: "" Apr 22 22:07:30.452: INFO: stdout: "service/frontend created\n" Apr 22 22:07:30.452: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 22 22:07:30.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 create -f -' Apr 22 22:07:30.783: INFO: stderr: "" Apr 22 22:07:30.783: INFO: stdout: "deployment.apps/frontend created\n" Apr 22 22:07:30.784: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 22 22:07:30.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 create -f -' Apr 22 22:07:31.113: INFO: stderr: "" Apr 22 22:07:31.114: INFO: stdout: "deployment.apps/agnhost-primary created\n" Apr 22 22:07:31.114: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 22 22:07:31.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 create -f -' Apr 22 22:07:31.432: INFO: stderr: "" Apr 22 22:07:31.432: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Apr 22 22:07:31.432: INFO: Waiting for all frontend pods to be Running. Apr 22 22:07:41.484: INFO: Waiting for frontend to serve content. Apr 22 22:07:41.492: INFO: Trying to add a new entry to the guestbook. Apr 22 22:07:42.499: INFO: Verifying that added entry can be retrieved. Apr 22 22:07:42.509: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources Apr 22 22:07:47.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 delete --grace-period=0 --force -f -' Apr 22 22:07:47.662: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 22 22:07:47.662: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Apr 22 22:07:47.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 delete --grace-period=0 --force -f -' Apr 22 22:07:47.798: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 22:07:47.798: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Apr 22 22:07:47.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 delete --grace-period=0 --force -f -' Apr 22 22:07:47.959: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 22:07:47.959: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 22 22:07:47.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 delete --grace-period=0 --force -f -' Apr 22 22:07:48.084: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 22:07:48.084: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 22 22:07:48.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 delete --grace-period=0 --force -f -' Apr 22 22:07:48.213: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 22:07:48.213: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Apr 22 22:07:48.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-318 delete --grace-period=0 --force -f -' Apr 22 22:07:48.353: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 22:07:48.353: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:48.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-318" for this suite. 
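------------------------------
Note: each guestbook manifest above is piped to 'kubectl create -f -'. The same frontend Deployment can be built in code; the sketch below mirrors the logged YAML via client-go (the target namespace is illustrative, everything else is taken from the manifest in the log).

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	replicas := int32(3)
	labels := map[string]string{"app": "guestbook", "tier": "frontend"}
	frontend := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "frontend"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels}, // must match the template labels
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "guestbook-frontend",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
						Args:  []string{"guestbook", "--backend-port", "6379"},
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceCPU:    resource.MustParse("100m"),
								corev1.ResourceMemory: resource.MustParse("100Mi"),
							},
						},
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("default").Create(context.TODO(), frontend, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------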
• [SLOW TEST:18.974 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":29,"skipped":495,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:30.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:07:30.866: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:07:32.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:07:34.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Apr 22 22:07:36.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:07:38.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262050, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:07:41.883: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:07:41.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7173-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:50.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3390" for this suite. STEP: Destroying namespace "webhook-3390-markers" for this suite. 
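------------------------------
Note: the step "Registering the mutating webhook for custom resource ... via the AdmissionRegistration API" above corresponds to creating a MutatingWebhookConfiguration pointing at the sample webhook service. A hedged sketch with admissionregistration/v1 follows; the service reference, path, rule targets, and CA bundle are placeholders, not the test's actual values.

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	path := "/mutating-custom-resource" // placeholder handler path
	port := int32(443)
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail
	config := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-cr-mutator"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "cr-mutator.demo.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default", Name: "e2e-test-webhook", Path: &path, Port: &port,
				},
				// Placeholder: a real registration needs the CA that signed the
				// webhook's serving certificate.
				CABundle: []byte("<PEM bundle for the webhook serving cert>"),
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create, admissionregistrationv1.Update,
				},
				// Matching both versions matters here: objects are mutated on
				// admission regardless of which version is currently storage.
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"demo.example.com"},
					APIVersions: []string{"v1", "v2"},
					Resources:   []string{"widgets"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	_, err = cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), config, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
------------------------------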
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.784 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":35,"skipped":676,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:46.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 22 22:07:46.254: INFO: Waiting up to 5m0s for pod "pod-a08e1529-cf44-4130-831b-2707fb23bc97" in namespace "emptydir-8249" to be "Succeeded or Failed" Apr 22 22:07:46.258: INFO: Pod "pod-a08e1529-cf44-4130-831b-2707fb23bc97": Phase="Pending", Reason="", readiness=false. Elapsed: 3.634633ms Apr 22 22:07:48.261: INFO: Pod "pod-a08e1529-cf44-4130-831b-2707fb23bc97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006643302s Apr 22 22:07:50.267: INFO: Pod "pod-a08e1529-cf44-4130-831b-2707fb23bc97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013083924s Apr 22 22:07:52.272: INFO: Pod "pod-a08e1529-cf44-4130-831b-2707fb23bc97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017344795s STEP: Saw pod success Apr 22 22:07:52.272: INFO: Pod "pod-a08e1529-cf44-4130-831b-2707fb23bc97" satisfied condition "Succeeded or Failed" Apr 22 22:07:52.275: INFO: Trying to get logs from node node2 pod pod-a08e1529-cf44-4130-831b-2707fb23bc97 container test-container: STEP: delete the pod Apr 22 22:07:52.290: INFO: Waiting for pod pod-a08e1529-cf44-4130-831b-2707fb23bc97 to disappear Apr 22 22:07:52.292: INFO: Pod pod-a08e1529-cf44-4130-831b-2707fb23bc97 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:52.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8249" for this suite. 
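------------------------------
Note: an emptyDir with an unset medium (the "default" in the test name above) is backed by node storage rather than tmpfs, and the test verifies a file written with mode 0666 through the mount. A minimal pod sketch of that setup; names and the shell check are illustrative (the real test drives an agnhost helper container rather than a shell).

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once, like the test pod
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Medium left unset: default (node disk), not tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c",
					"touch /mnt/volume1/f && chmod 0666 /mnt/volume1/f && ls -l /mnt/volume1/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/volume1"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------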
• [SLOW TEST:6.082 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":379,"failed":0} SSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:47.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:07:47.963: INFO: Creating pod... Apr 22 22:07:47.990: INFO: Pod Quantity: 1 Status: Pending Apr 22 22:07:48.993: INFO: Pod Quantity: 1 Status: Pending Apr 22 22:07:49.994: INFO: Pod Quantity: 1 Status: Pending Apr 22 22:07:50.996: INFO: Pod Quantity: 1 Status: Pending Apr 22 22:07:51.993: INFO: Pod Quantity: 1 Status: Pending Apr 22 22:07:52.994: INFO: Pod Quantity: 1 Status: Pending Apr 22 22:07:53.993: INFO: Pod Quantity: 1 Status: Pending Apr 22 22:07:54.995: INFO: Pod Status: Running Apr 22 22:07:54.995: INFO: Creating service... Apr 22 22:07:55.006: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/pods/agnhost/proxy/some/path/with/DELETE Apr 22 22:07:55.008: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Apr 22 22:07:55.008: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/pods/agnhost/proxy/some/path/with/GET Apr 22 22:07:55.013: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Apr 22 22:07:55.013: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/pods/agnhost/proxy/some/path/with/HEAD Apr 22 22:07:55.015: INFO: http.Client request:HEAD | StatusCode:200 Apr 22 22:07:55.015: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/pods/agnhost/proxy/some/path/with/OPTIONS Apr 22 22:07:55.017: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Apr 22 22:07:55.018: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/pods/agnhost/proxy/some/path/with/PATCH Apr 22 22:07:55.020: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Apr 22 22:07:55.020: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/pods/agnhost/proxy/some/path/with/POST Apr 22 22:07:55.022: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Apr 22 22:07:55.022: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/pods/agnhost/proxy/some/path/with/PUT Apr 22 22:07:55.024: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Apr 22 22:07:55.024: INFO: Starting http.Client for 
https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/services/test-service/proxy/some/path/with/DELETE Apr 22 22:07:55.028: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Apr 22 22:07:55.028: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/services/test-service/proxy/some/path/with/GET Apr 22 22:07:55.031: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Apr 22 22:07:55.031: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/services/test-service/proxy/some/path/with/HEAD Apr 22 22:07:55.034: INFO: http.Client request:HEAD | StatusCode:200 Apr 22 22:07:55.034: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/services/test-service/proxy/some/path/with/OPTIONS Apr 22 22:07:55.040: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Apr 22 22:07:55.040: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/services/test-service/proxy/some/path/with/PATCH Apr 22 22:07:55.044: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Apr 22 22:07:55.044: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/services/test-service/proxy/some/path/with/POST Apr 22 22:07:55.047: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Apr 22 22:07:55.047: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9374/services/test-service/proxy/some/path/with/PUT Apr 22 22:07:55.051: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:55.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9374" for this suite. 
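------------------------------
Note: every "Starting http.Client" entry above is an authenticated request to the apiserver's proxy sub-resource, which forwards the verb and remaining path to the pod or service. The same round trip through client-go's REST client looks like the sketch below (namespace and object names taken from the log).

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/namespaces/proxy-9374/pods/agnhost/proxy/some/path/with/GET
	body, err := cs.CoreV1().RESTClient().Get().
		Namespace("proxy-9374").
		Resource("pods").
		Name("agnhost").
		SubResource("proxy").
		Suffix("some", "path", "with", "GET").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod proxy response: %s\n", body)

	// The service variant only swaps the resource and name segments.
	body, err = cs.CoreV1().RESTClient().Get().
		Namespace("proxy-9374").
		Resource("services").
		Name("test-service").
		SubResource("proxy").
		Suffix("some", "path", "with", "GET").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("service proxy response: %s\n", body)
}

HEAD is handled the same way but carries no body, which is why the logged HEAD entries show only a status code.
------------------------------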
• [SLOW TEST:7.115 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":32,"skipped":599,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:50.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Apr 22 22:07:56.604: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1522 pod-service-account-7a2e19cc-56d8-421b-a667-7264e3942904 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 22 22:07:56.865: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1522 pod-service-account-7a2e19cc-56d8-421b-a667-7264e3942904 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 22 22:07:57.099: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1522 pod-service-account-7a2e19cc-56d8-421b-a667-7264e3942904 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:57.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1522" for this suite. 
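------------------------------
Note: the three "kubectl exec ... cat" commands above read the projected service-account volume that the kubelet mounts into (auto-mounting) pods at a fixed path. Run inside a container, the same check is just three file reads; a minimal sketch, runnable only inside a pod:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Fixed mount point for the projected service-account volume.
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(base, f))
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d bytes\n", f, len(b)) // the token itself should not be logged
	}
}

This is exactly what the test asserts via kubectl exec, just without shelling out.
------------------------------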
• [SLOW TEST:7.280 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":36,"skipped":681,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:05:15.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Apr 22 22:07:16.327: INFO: Successfully updated pod "var-expansion-cc34bfec-1db7-469d-874c-13b970094dde" STEP: waiting for pod running STEP: deleting the pod gracefully Apr 22 22:07:18.331: INFO: Deleting pod "var-expansion-cc34bfec-1db7-469d-874c-13b970094dde" in namespace "var-expansion-7117" Apr 22 22:07:18.336: INFO: Wait up to 5m0s for pod "var-expansion-cc34bfec-1db7-469d-874c-13b970094dde" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:07:58.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7117" for this suite. 
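------------------------------
Note: the pod in the test above sits in "creating the pod with failed condition" for roughly two minutes because its volume mount's subPathExpr initially expands to something unusable; updating the pod fixes the expansion and lets the container start. The sketch below shows the subPathExpr mechanism itself, with a downward-API environment variable feeding the expansion; names are illustrative, not the test's exact spec.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpathexpr-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "work",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c", "ls /subpath_mount"},
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "work",
					MountPath: "/subpath_mount",
					// Expanded from the container's env block before the mount
					// is set up; an unresolvable value keeps the container in
					// a failed-to-start state until the pod is updated.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------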
• [SLOW TEST:162.572 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":38,"skipped":431,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:52.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 22 22:07:56.352: INFO: &Pod{ObjectMeta:{send-events-6cbfff40-6657-4329-a20a-b32fd4c28c5a events-701 cf545b77-1e37-47c8-b37c-b76e4bb3c6d4 49829 0 2022-04-22 22:07:52 +0000 UTC map[name:foo time:329646941] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.25" ], "mac": "d6:52:53:aa:4c:13", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.25" ], "mac": "d6:52:53:aa:4c:13", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-04-22 22:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-22 22:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-22 22:07:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5864q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5864q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:
nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:07:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:07:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:07:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 22:07:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.25,StartTime:2022-04-22 22:07:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 22:07:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://7f269651c2249697407eb175eee0e80f4f432d467ea02cb573be9bac914ab66b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 22 22:07:58.357: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 22 22:08:00.362: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:00.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-701" for this suite. 
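------------------------------
Note: the assertions "Saw scheduler event" and "Saw kubelet event" above come from listing core/v1 Events whose involvedObject is the pod and inspecting Source.Component. The equivalent list call is sketched below (namespace and pod name taken from the log; the pod itself is deleted at the end of the test, but its events linger for the retention window).

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	events, err := cs.CoreV1().Events("events-701").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=send-events-6cbfff40-6657-4329-a20a-b32fd4c28c5a",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Scheduling shows up with Source.Component "default-scheduler";
		// container lifecycle events come from "kubelet" (Source.Host names the node).
		fmt.Printf("%-20s %-15s %s\n", e.Source.Component, e.Reason, e.Message)
	}
}
------------------------------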
• [SLOW TEST:8.067 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":21,"skipped":382,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:08:00.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Apr 22 22:08:00.421: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:00.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7825" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":22,"skipped":388,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:58.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:07:58.408: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:03.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2107" for this suite. 
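
Before the summary below: the custom-resource-definition test that just tore down exercises the CRD /status subresource. A minimal sketch of that get/patch round trip with the apiextensions clientset, assuming an already-registered CRD (the name crds.example.com and the patch payload are invented; the real test generates a random CRD and patches a condition):

    package main

    import (
    	"context"

    	apiext "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := apiext.NewForConfigOrDie(cfg)
    	ctx := context.TODO()

    	// Get: the status subresource comes back as part of the CRD object.
    	crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, "crds.example.com", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}

    	// UpdateStatus writes only the .status of the object just read.
    	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().UpdateStatus(ctx, crd, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}

    	// Patch can target the subresource explicitly via the trailing "status" argument
    	// (illustrative payload only).
    	patch := []byte(`{"status":{"conditions":[]}}`)
    	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Patch(ctx, "crds.example.com",
    		types.MergePatchType, patch, metav1.PatchOptions{}, "status"); err != nil {
    		panic(err)
    	}
    }
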
• [SLOW TEST:5.562 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":39,"skipped":451,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:57.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 22:07:57.736: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 22:07:59.746: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262077, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262077, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262077, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262077, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 22:08:01.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262077, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262077, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262077, loc:(*time.Location)(0x9e2e180)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786262077, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 22:08:04.756: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:04.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3604" for this suite. STEP: Destroying namespace "webhook-3604-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.459 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":37,"skipped":691,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:22.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-4389395f-ec67-4b87-aff9-3c4fe34facc1 in namespace container-probe-8856 Apr 22 22:07:26.792: INFO: Started pod busybox-4389395f-ec67-4b87-aff9-3c4fe34facc1 in namespace container-probe-8856 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 22:07:26.795: INFO: Initial restart count of pod busybox-4389395f-ec67-4b87-aff9-3c4fe34facc1 is 0 Apr 22 22:08:14.892: INFO: Restart count of pod container-probe-8856/busybox-4389395f-ec67-4b87-aff9-3c4fe34facc1 is now 1 (48.096625809s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:14.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8856" for this suite. 
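
The container-probe test above is the classic exec-liveness scenario: the container creates /tmp/health, later removes it, and the kubelet restarts the container once cat /tmp/health starts failing (restartCount went 0 -> 1 after ~48s in the log). A sketch of such a pod spec, assuming recent client-go types (in v1.21-era client-go the ProbeHandler field is still named Handler) and the usual busybox command from the upstream liveness example rather than the exact one this test uses:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "busybox-exec-liveness"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "busybox",
    				Image: "busybox:1.29",
    				// Healthy for 10s, then the probe file disappears.
    				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
    				LivenessProbe: &corev1.Probe{
    					ProbeHandler: corev1.ProbeHandler{
    						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
    					},
    					InitialDelaySeconds: 5,
    					PeriodSeconds:       5,
    				},
    			}},
    			// Always restart, so the test can watch restartCount go from 0 to 1.
    			RestartPolicy: corev1.RestartPolicyAlways,
    		},
    	}
    	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
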
• [SLOW TEST:52.156 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":541,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:08:14.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:15.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4521" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":28,"skipped":583,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:08:03.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:15.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7045" for this suite. • [SLOW TEST:11.097 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":40,"skipped":461,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:48.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted Apr 22 22:08:13.494: INFO: EndpointSlice for Service endpointslice-9278/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:23.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-9278" for this suite. 
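
The "not found" line above is the test polling for slices after deleting them; EndpointSlices are tied to their Service by the kubernetes.io/service-name label, so the recreation check is essentially a labelled list. A minimal sketch of that query (namespace and service name taken from the log, the rest assumed):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// The EndpointSlice controller labels every slice with its owning Service.
    	slices, err := client.DiscoveryV1().EndpointSlices("endpointslice-9278").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=example-named-port"})
    	if err != nil {
    		panic(err)
    	}
    	for _, s := range slices.Items {
    		fmt.Printf("slice %s: %d endpoints\n", s.Name, len(s.Endpoints))
    	}
    }
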
• [SLOW TEST:35.121 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":30,"skipped":511,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:08:04.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Apr 22 22:08:04.870: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:08:06.874: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:08:08.875: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled Apr 22 22:08:08.888: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:08:10.892: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:08:12.892: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides Apr 22 22:08:12.906: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:08:14.910: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:08:16.910: INFO: The status of Pod pod3 is Running (Ready = true) Apr 22 22:08:16.923: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:08:18.927: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Apr 22 22:08:18.930: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.207 http://127.0.0.1:54323/hostname] Namespace:hostport-1489 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:08:18.930: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 Apr 22 22:08:19.129: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.207:54323/hostname] 
Namespace:hostport-1489 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:08:19.129: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 UDP Apr 22 22:08:19.211: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.207 54323] Namespace:hostport-1489 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:08:19.211: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:24.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-1489" for this suite. • [SLOW TEST:19.474 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":38,"skipped":697,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:08:24.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics Apr 22 22:08:25.407: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Apr 22 22:08:25.506: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:25.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7531" for this suite. • ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:08:23.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-2369/configmap-test-e8a72445-21d7-4fa4-9b3c-09abf51dfcaf STEP: Creating a pod to test consume configMaps Apr 22 22:08:23.565: INFO: Waiting up to 5m0s for pod "pod-configmaps-f98fdedb-4daf-4ca6-a3f3-375913b5148e" in namespace "configmap-2369" to be "Succeeded or Failed" Apr 22 22:08:23.567: INFO: Pod "pod-configmaps-f98fdedb-4daf-4ca6-a3f3-375913b5148e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.464804ms Apr 22 22:08:25.571: INFO: Pod "pod-configmaps-f98fdedb-4daf-4ca6-a3f3-375913b5148e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006453087s Apr 22 22:08:27.575: INFO: Pod "pod-configmaps-f98fdedb-4daf-4ca6-a3f3-375913b5148e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010380248s Apr 22 22:08:29.579: INFO: Pod "pod-configmaps-f98fdedb-4daf-4ca6-a3f3-375913b5148e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014597984s STEP: Saw pod success Apr 22 22:08:29.579: INFO: Pod "pod-configmaps-f98fdedb-4daf-4ca6-a3f3-375913b5148e" satisfied condition "Succeeded or Failed" Apr 22 22:08:29.582: INFO: Trying to get logs from node node2 pod pod-configmaps-f98fdedb-4daf-4ca6-a3f3-375913b5148e container env-test: STEP: delete the pod Apr 22 22:08:29.602: INFO: Waiting for pod pod-configmaps-f98fdedb-4daf-4ca6-a3f3-375913b5148e to disappear Apr 22 22:08:29.604: INFO: Pod pod-configmaps-f98fdedb-4daf-4ca6-a3f3-375913b5148e no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:29.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2369" for this suite. 
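
The configmap test above ("consumable via environment variable") wires a ConfigMap key into the container environment and lets the env-test container print it; the pod reaching Succeeded is the assertion. The wiring looks roughly like this (ConfigMap, pod, and key names are invented; the suite generates random ones):

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.TODO()

    	cm := &corev1.ConfigMap{
    		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
    		Data:       map[string]string{"data-1": "value-1"},
    	}
    	if _, err := client.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever, // lets the pod reach Succeeded
    			Containers: []corev1.Container{{
    				Name:    "env-test",
    				Image:   "busybox:1.29",
    				Command: []string{"sh", "-c", "env"},
    				Env: []corev1.EnvVar{{
    					Name: "CONFIG_DATA_1",
    					ValueFrom: &corev1.EnvVarSource{
    						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
    							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
    							Key:                  "data-1",
    						},
    					},
    				}},
    			}},
    		},
    	}
    	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
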
• [SLOW TEST:6.087 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":519,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ Apr 22 22:08:29.642: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":39,"skipped":706,"failed":0} [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:08:25.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:31.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-3138" for this suite. 
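
kernel.shm_rmid_forced, used by the sysctl test above, is set through the pod-level security context; the test then reads the value back from /proc inside the container. A sketch of the relevant part of the spec (image and namespace are placeholders; sysctls outside the kubelet's safe set would additionally need the --allowed-unsafe-sysctls allowlist, per the test name):

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-test"},
    		Spec: corev1.PodSpec{
    			SecurityContext: &corev1.PodSecurityContext{
    				// Applied by the runtime before the container starts.
    				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
    			},
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "sysctl-check",
    				Image:   "busybox:1.29",
    				Command: []string{"/bin/sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"},
    			}},
    		},
    	}
    	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
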
• [SLOW TEST:6.070 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":40,"skipped":706,"failed":0} Apr 22 22:08:31.590: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:06:23.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-2005 STEP: creating replication controller nodeport-test in namespace services-2005 I0422 22:06:23.728997 28 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-2005, replica count: 2 I0422 22:06:26.780179 28 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:06:29.780509 28 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:06:32.782000 28 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 22:06:32.782: INFO: Creating new exec pod Apr 22 22:06:37.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Apr 22 22:06:38.125: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Apr 22 22:06:38.125: INFO: stdout: "nodeport-test-px4rq" Apr 22 22:06:38.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.51.246 80' Apr 22 22:06:38.366: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.51.246 80\nConnection to 10.233.51.246 80 port [tcp/http] succeeded!\n" Apr 22 22:06:38.366: INFO: stdout: "nodeport-test-7dkll" Apr 22 22:06:38.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:06:38.823: INFO: rc: 1 Apr 22 22:06:38.823: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c 
echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[roughly fifty near-identical retry blocks omitted: the same kubectl/nc probe was re-run about once per second from 22:06:39 through 22:07:28, and every attempt failed with "nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused"]
Apr 22 22:07:28.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:29.185: INFO: rc: 1 Apr 22 22:07:29.185: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:29.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:30.770: INFO: rc: 1 Apr 22 22:07:30.770: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:30.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:31.064: INFO: rc: 1 Apr 22 22:07:31.065: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:31.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:32.292: INFO: rc: 1 Apr 22 22:07:32.292: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:32.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:33.091: INFO: rc: 1 Apr 22 22:07:33.091: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:07:33.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:34.080: INFO: rc: 1 Apr 22 22:07:34.080: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:34.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:35.084: INFO: rc: 1 Apr 22 22:07:35.084: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:35.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:36.092: INFO: rc: 1 Apr 22 22:07:36.092: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:36.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:37.072: INFO: rc: 1 Apr 22 22:07:37.072: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:37.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:38.078: INFO: rc: 1 Apr 22 22:07:38.078: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:07:38.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:39.088: INFO: rc: 1 Apr 22 22:07:39.088: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:39.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:40.056: INFO: rc: 1 Apr 22 22:07:40.056: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:40.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:41.081: INFO: rc: 1 Apr 22 22:07:41.081: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:41.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:42.052: INFO: rc: 1 Apr 22 22:07:42.052: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:42.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:43.090: INFO: rc: 1 Apr 22 22:07:43.090: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:07:43.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:44.035: INFO: rc: 1 Apr 22 22:07:44.035: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:44.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:45.082: INFO: rc: 1 Apr 22 22:07:45.082: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:45.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:46.147: INFO: rc: 1 Apr 22 22:07:46.147: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:46.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:47.544: INFO: rc: 1 Apr 22 22:07:47.544: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:47.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:48.200: INFO: rc: 1 Apr 22 22:07:48.200: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:07:48.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:49.131: INFO: rc: 1 Apr 22 22:07:49.131: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:49.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:50.085: INFO: rc: 1 Apr 22 22:07:50.086: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:50.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:51.082: INFO: rc: 1 Apr 22 22:07:51.082: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:51.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:52.076: INFO: rc: 1 Apr 22 22:07:52.076: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:52.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:53.094: INFO: rc: 1 Apr 22 22:07:53.094: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:07:53.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:54.064: INFO: rc: 1 Apr 22 22:07:54.064: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30977 + echo hostName nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:54.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:55.076: INFO: rc: 1 Apr 22 22:07:55.076: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:55.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:56.193: INFO: rc: 1 Apr 22 22:07:56.193: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:56.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:57.089: INFO: rc: 1 Apr 22 22:07:57.089: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:57.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:58.115: INFO: rc: 1 Apr 22 22:07:58.115: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:07:58.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:07:59.085: INFO: rc: 1 Apr 22 22:07:59.085: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:59.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:00.086: INFO: rc: 1 Apr 22 22:08:00.086: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:00.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:01.072: INFO: rc: 1 Apr 22 22:08:01.072: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:01.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:02.108: INFO: rc: 1 Apr 22 22:08:02.108: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:02.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:03.080: INFO: rc: 1 Apr 22 22:08:03.080: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:03.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:04.107: INFO: rc: 1 Apr 22 22:08:04.107: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:04.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:05.101: INFO: rc: 1 Apr 22 22:08:05.101: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:05.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:06.346: INFO: rc: 1 Apr 22 22:08:06.346: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:06.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:07.088: INFO: rc: 1 Apr 22 22:08:07.088: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:07.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:08.065: INFO: rc: 1 Apr 22 22:08:08.065: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:08.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:09.092: INFO: rc: 1 Apr 22 22:08:09.092: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:09.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:10.316: INFO: rc: 1 Apr 22 22:08:10.316: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:10.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:11.062: INFO: rc: 1 Apr 22 22:08:11.062: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:11.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:12.063: INFO: rc: 1 Apr 22 22:08:12.063: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:12.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:13.074: INFO: rc: 1 Apr 22 22:08:13.074: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:13.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:14.401: INFO: rc: 1 Apr 22 22:08:14.401: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:14.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:15.070: INFO: rc: 1 Apr 22 22:08:15.070: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:15.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:16.067: INFO: rc: 1 Apr 22 22:08:16.067: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:16.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:17.167: INFO: rc: 1 Apr 22 22:08:17.167: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:17.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:18.068: INFO: rc: 1 Apr 22 22:08:18.068: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:18.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:19.149: INFO: rc: 1 Apr 22 22:08:19.149: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:19.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:20.081: INFO: rc: 1 Apr 22 22:08:20.081: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:20.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:21.094: INFO: rc: 1 Apr 22 22:08:21.094: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:21.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:22.413: INFO: rc: 1 Apr 22 22:08:22.413: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:22.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:23.075: INFO: rc: 1 Apr 22 22:08:23.075: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:23.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:24.058: INFO: rc: 1 Apr 22 22:08:24.058: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:24.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:25.086: INFO: rc: 1 Apr 22 22:08:25.086: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:25.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:26.050: INFO: rc: 1 Apr 22 22:08:26.050: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:26.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:27.081: INFO: rc: 1 Apr 22 22:08:27.082: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:27.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:28.065: INFO: rc: 1 Apr 22 22:08:28.065: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:28.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:29.087: INFO: rc: 1 Apr 22 22:08:29.087: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:29.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:30.513: INFO: rc: 1 Apr 22 22:08:30.514: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:30.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:31.216: INFO: rc: 1 Apr 22 22:08:31.216: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:31.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:32.198: INFO: rc: 1 Apr 22 22:08:32.198: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:32.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:33.071: INFO: rc: 1 Apr 22 22:08:33.071: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:33.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:34.078: INFO: rc: 1 Apr 22 22:08:34.078: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:34.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:35.049: INFO: rc: 1 Apr 22 22:08:35.049: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:35.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:36.054: INFO: rc: 1 Apr 22 22:08:36.054: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:36.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:37.068: INFO: rc: 1 Apr 22 22:08:37.068: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:37.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977' Apr 22 22:08:38.068: INFO: rc: 1 Apr 22 22:08:38.068: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30977 nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
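Note: the command being retried above is an ordinary NodePort reachability probe run from the helper pod. It can be reproduced by hand while triaging; a minimal sketch using this run's names (namespace services-2005, exec pod execpodw72k5, node IP 10.10.190.207, nodePort 30977), with a loop that mirrors the framework's roughly once-per-second retry and its 2-minute budget:

    # Single probe, exactly as the framework runs it: 2s TCP connect timeout.
    kubectl --kubeconfig=/root/.kube/config -n services-2005 exec execpodw72k5 -- \
      /bin/sh -x -c 'echo hostName | nc -v -t -w 2 10.10.190.207 30977'

    # Retry once per second for up to ~2 minutes, stopping on first success.
    for i in $(seq 1 120); do
      kubectl -n services-2005 exec execpodw72k5 -- \
        /bin/sh -c 'echo hostName | nc -v -t -w 2 10.10.190.207 30977' && break
      sleep 1
    done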
Apr 22 22:08:39.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977'
Apr 22 22:08:39.312: INFO: rc: 1
Apr 22 22:08:39.312: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2005 exec execpodw72k5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30977:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30977
nc: connect to 10.10.190.207 port 30977 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 22:08:39.313: FAIL: Unexpected error:
    <*errors.errorString | 0xc003228030>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30977 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30977 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001fadc80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001fadc80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001fadc80, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
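Note: every attempt fails fast with "Connection refused" rather than hanging until the 2s timeout, which usually means nothing is answering on the node port (no ready endpoints, or kube-proxy never programmed the NodePort rule) rather than a dropped-packet network problem. A hedged triage sketch; the Service name nodeport-test is an assumption inferred from the pod names, since this excerpt never prints it, and the last two checks are meant to run on the probed node (10.10.190.207):

    # Does the Service expose nodePort 30977, and does it have ready endpoints?
    kubectl -n services-2005 get svc nodeport-test -o wide   # service name assumed
    kubectl -n services-2005 get endpoints nodeport-test     # empty ENDPOINTS would explain the refusal

    # On the probed node: did kube-proxy program the port?
    iptables-save | grep 30977       # iptables mode
    ipvsadm -Ln | grep -A2 30977     # ipvs mode (if used)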
Apr 22 22:08:39.318: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodw72k5: { } Scheduled: Successfully assigned services-2005/execpodw72k5 to node1
Apr 22 22:08:39.318: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-7dkll: { } Scheduled: Successfully assigned services-2005/nodeport-test-7dkll to node1
Apr 22 22:08:39.318: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-px4rq: { } Scheduled: Successfully assigned services-2005/nodeport-test-px4rq to node2
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:23 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-7dkll
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:23 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-px4rq
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:25 +0000 UTC - event for nodeport-test-7dkll: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:25 +0000 UTC - event for nodeport-test-7dkll: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 276.27091ms
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:25 +0000 UTC - event for nodeport-test-7dkll: {kubelet node1} Started: Started container nodeport-test
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:25 +0000 UTC - event for nodeport-test-7dkll: {kubelet node1} Created: Created container nodeport-test
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:26 +0000 UTC - event for nodeport-test-px4rq: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:26 +0000 UTC - event for nodeport-test-px4rq: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 317.149822ms
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:27 +0000 UTC - event for nodeport-test-px4rq: {kubelet node2} Created: Created container nodeport-test
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:28 +0000 UTC - event for nodeport-test-px4rq: {kubelet node2} Started: Started container nodeport-test
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:34 +0000 UTC - event for execpodw72k5: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:35 +0000 UTC - event for execpodw72k5: {kubelet node1} Started: Started container agnhost-container
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:35 +0000 UTC - event for execpodw72k5: {kubelet node1} Created: Created container agnhost-container
Apr 22 22:08:39.318: INFO: At 2022-04-22 22:06:35 +0000 UTC - event for execpodw72k5: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 420.869371ms
Apr 22 22:08:39.321: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 22 22:08:39.321: INFO: execpodw72k5 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:06:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:06:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:06:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:06:32 +0000 UTC }]
Apr 22 22:08:39.321: INFO: nodeport-test-7dkll node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:06:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:06:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22
22:06:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:06:23 +0000 UTC }] Apr 22 22:08:39.321: INFO: nodeport-test-px4rq node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:06:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:06:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:06:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:06:23 +0000 UTC }] Apr 22 22:08:39.321: INFO: Apr 22 22:08:39.325: INFO: Logging node info for node master1 Apr 22 22:08:39.329: INFO: Node Info: &Node{ObjectMeta:{master1 70710064-7222-41b1-b51e-81deaa6e7014 50511 0 2022-04-22 19:56:45 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:56:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-22 20:04:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:30 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:30 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:30 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:08:30 +0000 UTC,LastTransitionTime:2022-04-22 19:59:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:025a90e4dec046189b065fcf68380be7,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:7e907077-ed98-4d46-8305-29673eaf3bf3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:08:39.329: INFO: Logging kubelet events for node master1 Apr 22 22:08:39.332: INFO: Logging pods the kubelet thinks is on node master1 Apr 22 22:08:39.357: INFO: kube-scheduler-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.357: INFO: Container kube-scheduler ready: true, restart count 0 Apr 22 22:08:39.357: INFO: kube-apiserver-master1 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.357: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:08:39.357: INFO: kube-controller-manager-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.357: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:08:39.357: INFO: kube-multus-ds-amd64-px448 started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.357: INFO: 
Container kube-multus ready: true, restart count 1 Apr 22 22:08:39.357: INFO: prometheus-operator-585ccfb458-zsrdh started at 2022-04-22 20:13:26 +0000 UTC (0+2 container statuses recorded) Apr 22 22:08:39.357: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:08:39.357: INFO: Container prometheus-operator ready: true, restart count 0 Apr 22 22:08:39.357: INFO: container-registry-65d7c44b96-7r6xc started at 2022-04-22 20:04:24 +0000 UTC (0+2 container statuses recorded) Apr 22 22:08:39.357: INFO: Container docker-registry ready: true, restart count 0 Apr 22 22:08:39.357: INFO: Container nginx ready: true, restart count 0 Apr 22 22:08:39.357: INFO: node-exporter-b7qpl started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:08:39.357: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:08:39.357: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:08:39.357: INFO: kube-proxy-hfgsd started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.357: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:08:39.357: INFO: kube-flannel-6vhmq started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:08:39.357: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:08:39.357: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:08:39.357: INFO: dns-autoscaler-7df78bfcfb-smkxp started at 2022-04-22 20:00:11 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.357: INFO: Container autoscaler ready: true, restart count 2 Apr 22 22:08:39.484: INFO: Latency metrics for node master1 Apr 22 22:08:39.484: INFO: Logging node info for node master2 Apr 22 22:08:39.486: INFO: Node Info: &Node{ObjectMeta:{master2 4a346a45-ed0b-49d9-a2ad-b419d2c4705c 50584 0 2022-04-22 19:57:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} 
{nfd-master Update v1 2022-04-22 20:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-04-22 20:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:38 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:38 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:38 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:08:38 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a68fd05f71b4f40ab5ab92028e707cc,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:45292226-7389-4aa9-8a98-33e443731d14,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:08:39.487: INFO: Logging kubelet events for node master2 Apr 22 22:08:39.489: INFO: Logging pods the kubelet thinks is on node master2 Apr 22 22:08:39.503: INFO: coredns-8474476ff8-fhb42 started at 2022-04-22 20:00:09 +0000 UTC 
(0+1 container statuses recorded) Apr 22 22:08:39.503: INFO: Container coredns ready: true, restart count 1 Apr 22 22:08:39.503: INFO: kube-scheduler-master2 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.503: INFO: Container kube-scheduler ready: true, restart count 1 Apr 22 22:08:39.503: INFO: kube-flannel-jlvdn started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:08:39.503: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:08:39.503: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:08:39.503: INFO: kube-multus-ds-amd64-7hw9v started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.503: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:08:39.503: INFO: node-feature-discovery-controller-cff799f9f-jfpb6 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.503: INFO: Container nfd-controller ready: true, restart count 0 Apr 22 22:08:39.503: INFO: node-exporter-4tbfp started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:08:39.503: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:08:39.503: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:08:39.503: INFO: kube-apiserver-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.503: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:08:39.503: INFO: kube-controller-manager-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.503: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:08:39.503: INFO: kube-proxy-df6vx started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.503: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:08:39.591: INFO: Latency metrics for node master2 Apr 22 22:08:39.591: INFO: Logging node info for node master3 Apr 22 22:08:39.594: INFO: Node Info: &Node{ObjectMeta:{master3 43c25e47-7b5c-4cf0-863e-39d16b72dcb3 50571 0 2022-04-22 19:57:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-04-22 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-04-22 20:11:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:37 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:37 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:37 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:08:37 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e38c1766e8048fab7e120a1bdaf206c,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7266f836-7ba1-4d9b-9691-d8344ab173f1,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:08:39.594: INFO: Logging kubelet events for node master3 Apr 22 22:08:39.596: INFO: Logging pods the kubelet thinks is on node master3 Apr 22 22:08:39.604: INFO: kube-multus-ds-amd64-tlrjm started at 2022-04-22 19:59:42 +0000 UTC (0+1 
container statuses recorded) Apr 22 22:08:39.604: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:08:39.604: INFO: coredns-8474476ff8-fdcj7 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.604: INFO: Container coredns ready: true, restart count 1 Apr 22 22:08:39.604: INFO: node-exporter-tnqsz started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:08:39.604: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:08:39.604: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:08:39.604: INFO: kube-proxy-z9q2t started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.604: INFO: Container kube-proxy ready: true, restart count 1 Apr 22 22:08:39.604: INFO: kube-flannel-6jkw9 started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:08:39.604: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:08:39.605: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 22:08:39.605: INFO: kube-scheduler-master3 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.605: INFO: Container kube-scheduler ready: true, restart count 2 Apr 22 22:08:39.605: INFO: kube-apiserver-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.605: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:08:39.605: INFO: kube-controller-manager-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.605: INFO: Container kube-controller-manager ready: true, restart count 3 Apr 22 22:08:39.686: INFO: Latency metrics for node master3 Apr 22 22:08:39.686: INFO: Logging node info for node node1 Apr 22 22:08:39.688: INFO: Node Info: &Node{ObjectMeta:{node1 e0ec3d42-4e2e-47e3-b369-98011b25b39b 50535 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true 
feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:11:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:29 +0000 UTC,LastTransitionTime:2022-04-22 20:02:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:33 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:33 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:33 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:08:33 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cb8bd90647b418e9defe4fbcf1e6b5b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:bd49e3f7-3bce-4d4e-8596-432fc9a7c1c3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:08:39.690: INFO: Logging 
kubelet events for node node1 Apr 22 22:08:39.692: INFO: Logging pods the kubelet thinks is on node node1 Apr 22 22:08:39.714: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 22 22:08:39.714: INFO: prometheus-k8s-0 started at 2022-04-22 20:13:52 +0000 UTC (0+4 container statuses recorded) Apr 22 22:08:39.714: INFO: Container config-reloader ready: true, restart count 0 Apr 22 22:08:39.714: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 22 22:08:39.714: INFO: Container grafana ready: true, restart count 0 Apr 22 22:08:39.714: INFO: Container prometheus ready: true, restart count 1 Apr 22 22:08:39.714: INFO: collectd-g2c8k started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded) Apr 22 22:08:39.714: INFO: Container collectd ready: true, restart count 0 Apr 22 22:08:39.714: INFO: Container collectd-exporter ready: true, restart count 0 Apr 22 22:08:39.714: INFO: Container rbac-proxy ready: true, restart count 0 Apr 22 22:08:39.714: INFO: nginx-proxy-node1 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container nginx-proxy ready: true, restart count 2 Apr 22 22:08:39.714: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 22 22:08:39.714: INFO: kube-proxy-v8fdh started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:08:39.714: INFO: externalname-service-dszzl started at 2022-04-22 22:07:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container externalname-service ready: true, restart count 0 Apr 22 22:08:39.714: INFO: kube-flannel-l4rjs started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Init container install-cni ready: true, restart count 2 Apr 22 22:08:39.714: INFO: Container kube-flannel ready: true, restart count 3 Apr 22 22:08:39.714: INFO: affinity-nodeport-timeout-fzmfl started at 2022-04-22 22:07:45 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Apr 22 22:08:39.714: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g started at 2022-04-22 20:16:40 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container tas-extender ready: true, restart count 0 Apr 22 22:08:39.714: INFO: externalname-service-6k4wn started at 2022-04-22 22:07:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container externalname-service ready: true, restart count 0 Apr 22 22:08:39.714: INFO: nodeport-test-7dkll started at 2022-04-22 22:06:23 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container nodeport-test ready: true, restart count 0 Apr 22 22:08:39.714: INFO: execpodw72k5 started at 2022-04-22 22:06:32 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container agnhost-container ready: true, restart count 0 Apr 22 22:08:39.714: INFO: cmk-init-discover-node1-7s78z started at 2022-04-22 20:11:46 +0000 UTC (0+3 container statuses recorded) Apr 22 22:08:39.714: INFO: Container discover ready: false, restart count 0 Apr 22 
22:08:39.714: INFO: Container init ready: false, restart count 0 Apr 22 22:08:39.714: INFO: Container install ready: false, restart count 0 Apr 22 22:08:39.714: INFO: affinity-nodeport-timeout-5dp8x started at 2022-04-22 22:07:45 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Apr 22 22:08:39.714: INFO: node-feature-discovery-worker-2hkr5 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container nfd-worker ready: true, restart count 0 Apr 22 22:08:39.714: INFO: node-exporter-9zzfv started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:08:39.714: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:08:39.714: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:08:39.714: INFO: kube-multus-ds-amd64-x8jqs started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:08:39.714: INFO: affinity-nodeport-timeout-q44rz started at 2022-04-22 22:07:45 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Apr 22 22:08:39.714: INFO: execpod-affinityt9c2p started at 2022-04-22 22:07:51 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:39.714: INFO: Container agnhost-container ready: true, restart count 0 Apr 22 22:08:39.714: INFO: cmk-2vd7z started at 2022-04-22 20:12:29 +0000 UTC (0+2 container statuses recorded) Apr 22 22:08:39.714: INFO: Container nodereport ready: true, restart count 0 Apr 22 22:08:39.714: INFO: Container reconcile ready: true, restart count 0 Apr 22 22:08:39.981: INFO: Latency metrics for node node1 Apr 22 22:08:39.981: INFO: Logging node info for node node2 Apr 22 22:08:39.984: INFO: Node Info: &Node{ObjectMeta:{node2 ef89f5d1-0c69-4be8-a041-8437402ef215 50515 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true 
feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:12:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:12:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:30 +0000 UTC,LastTransitionTime:2022-04-22 20:02:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:31 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:31 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:08:31 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:08:31 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e6f6d1644f942b881dbf2d9722ff85b,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:cc218e06-beff-411d-b91e-f4a272d9c83f,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:08:39.985: INFO: Logging kubelet events for node node2 Apr 22 22:08:39.987: INFO: Logging pods the kubelet thinks is on node node2 Apr 22 22:08:40.003: INFO: pod-projected-configmaps-94163a63-f7ef-477f-ae45-d42109720d37 started at 2022-04-22 22:07:30 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:40.003: INFO: Container agnhost-container ready: true, restart count 0 Apr 22 22:08:40.003: INFO: kube-flannel-2kskh started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:08:40.003: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:08:40.003: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 22:08:40.003: INFO: cmk-init-discover-node2-2m4dr started at 2022-04-22 20:12:06 +0000 UTC (0+3 container statuses recorded) Apr 22 22:08:40.003: INFO: Container discover ready: false, restart count 0 Apr 22 22:08:40.004: INFO: Container init ready: false, restart count 0 Apr 22 22:08:40.004: INFO: Container install ready: false, 
restart count 0 Apr 22 22:08:40.004: INFO: node-exporter-c4bhs started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:08:40.004: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:08:40.004: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:08:40.004: INFO: kube-proxy-jvkvz started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:40.004: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:08:40.004: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:40.004: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 22 22:08:40.004: INFO: var-expansion-d3ba0146-242c-45b5-8aae-93f2db3edc45 started at 2022-04-22 22:08:15 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:40.004: INFO: Container dapi-container ready: true, restart count 0 Apr 22 22:08:40.004: INFO: node-feature-discovery-worker-bktph started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:40.004: INFO: Container nfd-worker ready: true, restart count 0 Apr 22 22:08:40.004: INFO: cmk-vdkxb started at 2022-04-22 20:12:30 +0000 UTC (0+2 container statuses recorded) Apr 22 22:08:40.004: INFO: Container nodereport ready: true, restart count 0 Apr 22 22:08:40.004: INFO: Container reconcile ready: true, restart count 0 Apr 22 22:08:40.004: INFO: cmk-webhook-6c9d5f8578-nmxns started at 2022-04-22 20:12:30 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:40.004: INFO: Container cmk-webhook ready: true, restart count 0 Apr 22 22:08:40.004: INFO: nginx-proxy-node2 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:40.004: INFO: Container nginx-proxy ready: true, restart count 1 Apr 22 22:08:40.004: INFO: nodeport-test-px4rq started at 2022-04-22 22:06:23 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:40.004: INFO: Container nodeport-test ready: true, restart count 0 Apr 22 22:08:40.004: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:40.004: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 22 22:08:40.004: INFO: execpodfhnb2 started at 2022-04-22 22:08:01 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:40.004: INFO: Container agnhost-container ready: true, restart count 0 Apr 22 22:08:40.004: INFO: kube-multus-ds-amd64-kjrqq started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:08:40.004: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:08:40.004: INFO: collectd-ptpbz started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded) Apr 22 22:08:40.004: INFO: Container collectd ready: true, restart count 0 Apr 22 22:08:40.004: INFO: Container collectd-exporter ready: true, restart count 0 Apr 22 22:08:40.004: INFO: Container rbac-proxy ready: true, restart count 0 Apr 22 22:08:40.204: INFO: Latency metrics for node node2 Apr 22 22:08:40.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2005" for this suite. 
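------------------------------
The per-node dumps above (Node Info, the image inventory, and the per-pod container statuses for node1 and node2) are what the e2e framework records for every schedulable node when a spec fails, so the failure that follows can be read against the cluster state at that moment. For reproducing the same view outside the suite, here is a minimal client-go sketch in Go, the suite's own language; it assumes the kubeconfig path shown in this log, and "node1" is simply the node being inspected:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite loads (">>> kubeConfig: /root/.kube/config").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods bound to the node, across all namespaces, via a field
	// selector -- the same set the "pods the kubelet thinks is on node"
	// walk above iterates over.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node1",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, s := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready=%t restarts=%d\n",
				p.Namespace, p.Name, s.Name, s.Ready, s.RestartCount)
		}
	}
}

Each printed line corresponds to a "Container ... ready: ..., restart count ..." entry in the dump; the "Init container" lines come from p.Status.InitContainerStatuses and would need a second loop to match.
------------------------------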
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [136.514 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:08:39.313: Unexpected error: <*errors.errorString | 0xc003228030>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30977 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30977 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":18,"skipped":296,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} Apr 22 22:08:40.220: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:08:15.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:43.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6845" for this suite. • [SLOW TEST:28.064 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":-1,"completed":41,"skipped":462,"failed":0} Apr 22 22:08:43.138: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:42.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:07:42.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 22 22:07:50.400: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-22T22:07:50Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-22T22:07:50Z]] name:name1 resourceVersion:49695 uid:7db4cdda-2673-4f27-8455-fa00e8826f31] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 22 22:08:00.407: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-22T22:08:00Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-22T22:08:00Z]] name:name2 resourceVersion:50009 uid:52599356-0c79-4570-84b6-2f1ec7af68b2] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 22 22:08:10.413: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-22T22:07:50Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-22T22:08:10Z]] name:name1 resourceVersion:50192 uid:7db4cdda-2673-4f27-8455-fa00e8826f31] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 22 22:08:20.423: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-22T22:08:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-22T22:08:20Z]] name:name2 resourceVersion:50333 uid:52599356-0c79-4570-84b6-2f1ec7af68b2] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 22 22:08:30.430: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-22T22:07:50Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] 
f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-22T22:08:10Z]] name:name1 resourceVersion:50489 uid:7db4cdda-2673-4f27-8455-fa00e8826f31] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 22 22:08:40.437: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-22T22:08:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-22T22:08:20Z]] name:name2 resourceVersion:50596 uid:52599356-0c79-4570-84b6-2f1ec7af68b2] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:50.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9392" for this suite. • [SLOW TEST:68.127 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":37,"skipped":636,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} Apr 22 22:08:50.959: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:30.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-0f73f940-ed96-4694-8027-ba49140897da STEP: Creating the pod Apr 22 22:07:30.906: INFO: The status of Pod pod-projected-configmaps-94163a63-f7ef-477f-ae45-d42109720d37 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:32.909: INFO: The status of Pod pod-projected-configmaps-94163a63-f7ef-477f-ae45-d42109720d37 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:34.909: INFO: The status of Pod pod-projected-configmaps-94163a63-f7ef-477f-ae45-d42109720d37 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:36.910: INFO: The status of Pod pod-projected-configmaps-94163a63-f7ef-477f-ae45-d42109720d37 is Pending, waiting for it to be Running (with Ready 
= true) Apr 22 22:07:38.910: INFO: The status of Pod pod-projected-configmaps-94163a63-f7ef-477f-ae45-d42109720d37 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:40.909: INFO: The status of Pod pod-projected-configmaps-94163a63-f7ef-477f-ae45-d42109720d37 is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:42.909: INFO: The status of Pod pod-projected-configmaps-94163a63-f7ef-477f-ae45-d42109720d37 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-0f73f940-ed96-4694-8027-ba49140897da STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:57.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7531" for this suite. • [SLOW TEST:87.140 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":650,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} Apr 22 22:08:58.005: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:08:15.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Apr 22 22:08:19.120: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6144 PodName:var-expansion-d3ba0146-242c-45b5-8aae-93f2db3edc45 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:08:19.120: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Apr 22 22:08:19.229: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6144 PodName:var-expansion-d3ba0146-242c-45b5-8aae-93f2db3edc45 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 22:08:19.229: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Apr 22 22:08:19.817: INFO: Successfully updated pod "var-expansion-d3ba0146-242c-45b5-8aae-93f2db3edc45" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Apr 22 22:08:19.819: INFO: Deleting pod "var-expansion-d3ba0146-242c-45b5-8aae-93f2db3edc45" in namespace "var-expansion-6144" Apr 22 22:08:19.824: INFO: Wait up to 5m0s for pod "var-expansion-d3ba0146-242c-45b5-8aae-93f2db3edc45" to be fully 
deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:08:59.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6144" for this suite. • [SLOW TEST:44.761 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:08:00.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0422 22:08:00.490791 29 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:10:00.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-9496" for this suite. 
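------------------------------
The ReplaceConcurrent spec above creates a cronjob whose concurrencyPolicy is Replace: when the next scheduled run arrives while a job is still active, the controller deletes the running job and starts a replacement, which is exactly what "Ensuring the job is replaced with a new one" asserts. The warning in the log (batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+) points at the GA API; a minimal Go sketch of the equivalent batch/v1 object follows -- the name "replace-demo", the schedule, and the sleep command are illustrative, while busybox:1.28 is taken from the node image lists above:

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "replace-demo"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			// Replace: if the previous job is still running when the next
			// run is due, delete it and start a fresh one -- the behaviour
			// the spec above exercises.
			ConcurrencyPolicy: batchv1.ReplaceConcurrent,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "sleeper",
								Image:   "busybox:1.28",
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
	if _, err := cs.BatchV1().CronJobs("default").Create(
		context.TODO(), cj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------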
• [SLOW TEST:120.051 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":23,"skipped":400,"failed":0} Apr 22 22:10:00.522: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:42.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-6439 Apr 22 22:07:42.947: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 22 22:07:44.951: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Apr 22 22:07:44.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Apr 22 22:07:45.230: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Apr 22 22:07:45.230: INFO: stdout: "iptables" Apr 22 22:07:45.230: INFO: proxyMode: iptables Apr 22 22:07:45.239: INFO: Waiting for pod kube-proxy-mode-detector to disappear Apr 22 22:07:45.241: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-6439 STEP: creating replication controller affinity-nodeport-timeout in namespace services-6439 I0422 22:07:45.259815 27 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6439, replica count: 3 I0422 22:07:48.310659 27 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:07:51.312134 27 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 22:07:51.320: INFO: Creating new exec pod Apr 22 22:07:56.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 22 22:07:56.681: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 22 22:07:56.681: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 22:07:56.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 
exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.41.123 80' Apr 22 22:07:56.985: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.41.123 80\nConnection to 10.233.41.123 80 port [tcp/http] succeeded!\n" Apr 22 22:07:56.985: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 22:07:56.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:07:57.445: INFO: rc: 1 Apr 22 22:07:57.445: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:58.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:07:58.691: INFO: rc: 1 Apr 22 22:07:58.691: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:07:59.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:07:59.717: INFO: rc: 1 Apr 22 22:07:59.717: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:00.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:00.706: INFO: rc: 1 Apr 22 22:08:00.706: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:01.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:01.707: INFO: rc: 1 Apr 22 22:08:01.707: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:02.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:02.693: INFO: rc: 1 Apr 22 22:08:02.694: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:03.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:03.706: INFO: rc: 1 Apr 22 22:08:03.706: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:04.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:04.705: INFO: rc: 1 Apr 22 22:08:04.705: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:05.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:05.717: INFO: rc: 1 Apr 22 22:08:05.717: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:06.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:06.709: INFO: rc: 1 Apr 22 22:08:06.709: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:07.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:07.716: INFO: rc: 1 Apr 22 22:08:07.716: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:08.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:08.713: INFO: rc: 1 Apr 22 22:08:08.713: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:09.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:09.703: INFO: rc: 1 Apr 22 22:08:09.703: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:10.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:10.684: INFO: rc: 1 Apr 22 22:08:10.684: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:11.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:11.687: INFO: rc: 1 Apr 22 22:08:11.687: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:12.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:12.704: INFO: rc: 1 Apr 22 22:08:12.704: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:13.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:13.711: INFO: rc: 1 Apr 22 22:08:13.711: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:14.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:14.681: INFO: rc: 1 Apr 22 22:08:14.681: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:15.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:15.697: INFO: rc: 1 Apr 22 22:08:15.698: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:16.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:08:16.679: INFO: rc: 1 Apr 22 22:08:16.679: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[The identical reachability check is retried roughly once per second from 22:08:14 through 22:09:48; every attempt returns rc: 1 with the same "nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused" error.]
Apr 22 22:09:46.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:46.686: INFO: rc: 1 Apr 22 22:09:46.686: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:47.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:47.711: INFO: rc: 1 Apr 22 22:09:47.711: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:48.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:48.698: INFO: rc: 1 Apr 22 22:09:48.698: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30188 + echo hostName nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:49.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:49.679: INFO: rc: 1 Apr 22 22:09:49.679: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:50.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:50.679: INFO: rc: 1 Apr 22 22:09:50.679: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:51.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:51.708: INFO: rc: 1 Apr 22 22:09:51.708: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:52.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:52.700: INFO: rc: 1 Apr 22 22:09:52.700: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:53.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:53.694: INFO: rc: 1 Apr 22 22:09:53.694: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:54.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:54.709: INFO: rc: 1 Apr 22 22:09:54.709: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:55.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:55.711: INFO: rc: 1 Apr 22 22:09:55.711: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:56.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:56.692: INFO: rc: 1 Apr 22 22:09:56.692: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:57.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188' Apr 22 22:09:57.700: INFO: rc: 1 Apr 22 22:09:57.700: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30188 nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
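The condensed loop above is a retry-until-deadline reachability probe: dial the NodePort endpoint, and on failure sleep about a second and try again until an overall budget expires. A minimal standalone Go sketch of that pattern, for illustration only (waitReachable is a hypothetical helper, not the framework's actual code; the address and the 2s/2m timeouts simply mirror the values in the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitReachable dials addr over TCP once per second until a connection
// succeeds or the overall timeout elapses, mirroring the log's probe loop.
func waitReachable(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Per-attempt dial timeout of 2s, matching `nc -w 2` in the log.
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // the endpoint accepted the connection
		}
		fmt.Printf("probe failed: %v; retrying...\n", err)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	if err := waitReachable("10.10.190.207:30188", 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}

Run against the endpoint above it would, like the test, exhaust the two-minute budget and report the same timeout error seen next.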
Apr 22 22:09:57.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188'
Apr 22 22:09:57.941: INFO: rc: 1
Apr 22 22:09:57.941: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6439 exec execpod-affinityt9c2p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30188:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30188
nc: connect to 10.10.190.207 port 30188 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
Apr 22 22:09:57.941: FAIL: Unexpected error:
    <*errors.errorString | 0xc0010a7580>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30188 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30188 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc0011a3600, 0x77b33d8, 0xc0038eaf20, 0xc00156c780)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001c80c00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001c80c00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001c80c00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Apr 22 22:09:57.943: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-6439, will wait for the garbage collector to delete the pods
Apr 22 22:09:58.019: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 4.349259ms
Apr 22 22:09:58.120: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.741056ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6439".
STEP: Found 35 events.
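For context before the collected events below: the failing helper, execAffinityTestForSessionAffinityTimeout (named in the stack trace above), exercises a NodePort Service configured with ClientIP session affinity plus an affinity timeout. A minimal Go sketch of that kind of Service object, assuming illustrative names, selector, and timeout value (the test's actual fixture is not reproduced in this log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Assumed short affinity timeout purely for illustration.
	timeoutSeconds := int32(10)
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "affinity-nodeport-timeout"}, // assumed selector
			Ports:    []corev1.ServicePort{{Port: 80}},
			// Pin each client IP to one backend until the timeout elapses.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeoutSeconds},
			},
		},
	}
	fmt.Printf("%s affinity, timeout %ds\n",
		svc.Spec.SessionAffinity, *svc.Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
}

With such a spec, kube-proxy keeps routing a given client IP to the same backend until TimeoutSeconds of idleness passes; the "Connection refused" probes above mean the NodePort never accepted a connection at all, so the affinity behaviour under test was never reached. The 35 collected namespace events follow.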
Apr 22 22:10:07.936: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-5dp8x: { } Scheduled: Successfully assigned services-6439/affinity-nodeport-timeout-5dp8x to node1 Apr 22 22:10:07.936: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-fzmfl: { } Scheduled: Successfully assigned services-6439/affinity-nodeport-timeout-fzmfl to node1 Apr 22 22:10:07.936: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-q44rz: { } Scheduled: Successfully assigned services-6439/affinity-nodeport-timeout-q44rz to node1 Apr 22 22:10:07.937: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityt9c2p: { } Scheduled: Successfully assigned services-6439/execpod-affinityt9c2p to node1 Apr 22 22:10:07.937: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned services-6439/kube-proxy-mode-detector to node2 Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:44 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Started: Started container agnhost-container Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:44 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Created: Created container agnhost-container Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:44 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 289.624384ms Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:44 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:45 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-5dp8x Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:45 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-q44rz Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:45 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-fzmfl Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:45 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Killing: Stopping container agnhost-container Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:45 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:46 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 328.549117ms Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:47 +0000 UTC - event for affinity-nodeport-timeout-q44rz: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 310.679069ms Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:47 +0000 UTC - event for affinity-nodeport-timeout-q44rz: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:48 +0000 UTC - event for affinity-nodeport-timeout-5dp8x: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 617.21308ms Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:48 +0000 UTC - event for affinity-nodeport-timeout-5dp8x: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:48 +0000 UTC - event for affinity-nodeport-timeout-fzmfl: {kubelet node1} Created: Created container affinity-nodeport-timeout Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:48 +0000 UTC - event for affinity-nodeport-timeout-fzmfl: {kubelet node1} Started: Started container affinity-nodeport-timeout Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:48 +0000 UTC - event for affinity-nodeport-timeout-fzmfl: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:48 +0000 UTC - event for affinity-nodeport-timeout-fzmfl: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 347.053778ms Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:48 +0000 UTC - event for affinity-nodeport-timeout-q44rz: {kubelet node1} Created: Created container affinity-nodeport-timeout Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:48 +0000 UTC - event for affinity-nodeport-timeout-q44rz: {kubelet node1} Started: Started container affinity-nodeport-timeout Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:49 +0000 UTC - event for affinity-nodeport-timeout-5dp8x: {kubelet node1} Started: Started container affinity-nodeport-timeout Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:49 +0000 UTC - event for affinity-nodeport-timeout-5dp8x: {kubelet node1} Created: Created container affinity-nodeport-timeout Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:52 +0000 UTC - event for execpod-affinityt9c2p: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:53 +0000 UTC - event for execpod-affinityt9c2p: {kubelet node1} Started: Started container agnhost-container Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:53 +0000 UTC - event for execpod-affinityt9c2p: {kubelet node1} Created: Created container agnhost-container Apr 22 22:10:07.937: INFO: At 2022-04-22 22:07:53 +0000 UTC - event for execpod-affinityt9c2p: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 323.199578ms Apr 22 22:10:07.937: INFO: At 2022-04-22 22:09:57 +0000 UTC - event for execpod-affinityt9c2p: {kubelet node1} Killing: Stopping container agnhost-container Apr 22 22:10:07.937: INFO: At 2022-04-22 22:09:58 +0000 UTC - event for affinity-nodeport-timeout-5dp8x: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Apr 22 22:10:07.937: INFO: At 2022-04-22 22:09:58 +0000 UTC - event for 
affinity-nodeport-timeout-fzmfl: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Apr 22 22:10:07.937: INFO: At 2022-04-22 22:09:58 +0000 UTC - event for affinity-nodeport-timeout-q44rz: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Apr 22 22:10:07.939: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 22:10:07.939: INFO: Apr 22 22:10:07.944: INFO: Logging node info for node master1 Apr 22 22:10:07.946: INFO: Node Info: &Node{ObjectMeta:{master1 70710064-7222-41b1-b51e-81deaa6e7014 50936 0 2022-04-22 19:56:45 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:56:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-22 20:04:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 
DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:59:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:025a90e4dec046189b065fcf68380be7,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:7e907077-ed98-4d46-8305-29673eaf3bf3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:10:07.947: INFO: Logging kubelet events for node master1 Apr 22 22:10:07.949: INFO: Logging pods the kubelet thinks is on node master1 Apr 22 22:10:07.972: INFO: kube-proxy-hfgsd started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:07.972: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:10:07.972: INFO: kube-flannel-6vhmq started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:10:07.972: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:10:07.972: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:10:07.972: INFO: dns-autoscaler-7df78bfcfb-smkxp started at 2022-04-22 20:00:11 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:07.973: INFO: Container autoscaler ready: true, restart count 2 Apr 22 22:10:07.973: INFO: container-registry-65d7c44b96-7r6xc started at 2022-04-22 20:04:24 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:07.973: INFO: Container docker-registry ready: true, restart count 0 Apr 22 22:10:07.973: INFO: Container nginx ready: 
true, restart count 0 Apr 22 22:10:07.973: INFO: node-exporter-b7qpl started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:07.973: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:10:07.973: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:10:07.973: INFO: kube-apiserver-master1 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:07.973: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:10:07.973: INFO: kube-controller-manager-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:07.973: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:10:07.973: INFO: kube-multus-ds-amd64-px448 started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:07.973: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:10:07.973: INFO: prometheus-operator-585ccfb458-zsrdh started at 2022-04-22 20:13:26 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:07.973: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:10:07.973: INFO: Container prometheus-operator ready: true, restart count 0 Apr 22 22:10:07.973: INFO: kube-scheduler-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:07.973: INFO: Container kube-scheduler ready: true, restart count 0 Apr 22 22:10:08.071: INFO: Latency metrics for node master1 Apr 22 22:10:08.071: INFO: Logging node info for node master2 Apr 22 22:10:08.075: INFO: Node Info: &Node{ObjectMeta:{master2 4a346a45-ed0b-49d9-a2ad-b419d2c4705c 50916 0 2022-04-22 19:57:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-04-22 20:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-04-22 
20:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:09:58 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:09:58 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:09:58 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:09:58 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a68fd05f71b4f40ab5ab92028e707cc,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:45292226-7389-4aa9-8a98-33e443731d14,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:10:08.076: INFO: Logging kubelet events for node master2 Apr 22 22:10:08.079: INFO: Logging pods the kubelet thinks is on node master2 Apr 22 22:10:08.089: INFO: kube-controller-manager-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.089: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:10:08.089: INFO: 
kube-proxy-df6vx started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.089: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:10:08.089: INFO: node-feature-discovery-controller-cff799f9f-jfpb6 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.089: INFO: Container nfd-controller ready: true, restart count 0 Apr 22 22:10:08.089: INFO: node-exporter-4tbfp started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:08.089: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:10:08.089: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:10:08.089: INFO: kube-apiserver-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.089: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:10:08.089: INFO: kube-flannel-jlvdn started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:10:08.089: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:10:08.089: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:10:08.089: INFO: kube-multus-ds-amd64-7hw9v started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.089: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:10:08.089: INFO: coredns-8474476ff8-fhb42 started at 2022-04-22 20:00:09 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.089: INFO: Container coredns ready: true, restart count 1 Apr 22 22:10:08.089: INFO: kube-scheduler-master2 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.089: INFO: Container kube-scheduler ready: true, restart count 1 Apr 22 22:10:08.177: INFO: Latency metrics for node master2 Apr 22 22:10:08.177: INFO: Logging node info for node master3 Apr 22 22:10:08.180: INFO: Node Info: &Node{ObjectMeta:{master3 43c25e47-7b5c-4cf0-863e-39d16b72dcb3 50992 0 2022-04-22 19:57:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-04-22 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-04-22 20:11:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e38c1766e8048fab7e120a1bdaf206c,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7266f836-7ba1-4d9b-9691-d8344ab173f1,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:10:08.181: INFO: Logging kubelet events for node master3 Apr 22 22:10:08.183: INFO: Logging pods the kubelet thinks is on node master3 Apr 22 22:10:08.192: INFO: kube-apiserver-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container 
statuses recorded) Apr 22 22:10:08.192: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:10:08.192: INFO: kube-controller-manager-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.192: INFO: Container kube-controller-manager ready: true, restart count 3 Apr 22 22:10:08.192: INFO: kube-scheduler-master3 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.192: INFO: Container kube-scheduler ready: true, restart count 2 Apr 22 22:10:08.192: INFO: kube-proxy-z9q2t started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.192: INFO: Container kube-proxy ready: true, restart count 1 Apr 22 22:10:08.192: INFO: kube-flannel-6jkw9 started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:10:08.192: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:10:08.192: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 22:10:08.192: INFO: kube-multus-ds-amd64-tlrjm started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.192: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:10:08.192: INFO: coredns-8474476ff8-fdcj7 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.192: INFO: Container coredns ready: true, restart count 1 Apr 22 22:10:08.192: INFO: node-exporter-tnqsz started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:08.192: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:10:08.192: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:10:08.269: INFO: Latency metrics for node master3 Apr 22 22:10:08.269: INFO: Logging node info for node node1 Apr 22 22:10:08.272: INFO: Node Info: &Node{ObjectMeta:{node1 e0ec3d42-4e2e-47e3-b369-98011b25b39b 50956 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true 
feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:11:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:29 +0000 UTC,LastTransitionTime:2022-04-22 20:02:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:05 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:05 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:05 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:10:05 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cb8bd90647b418e9defe4fbcf1e6b5b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:bd49e3f7-3bce-4d4e-8596-432fc9a7c1c3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:10:08.273: INFO: Logging 
kubelet events for node node1 Apr 22 22:10:08.275: INFO: Logging pods the kubelet thinks are on node node1 Apr 22 22:10:08.288: INFO: nginx-proxy-node1 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.288: INFO: Container nginx-proxy ready: true, restart count 2 Apr 22 22:10:08.288: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.289: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 22 22:10:08.289: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.289: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 22 22:10:08.289: INFO: prometheus-k8s-0 started at 2022-04-22 20:13:52 +0000 UTC (0+4 container statuses recorded) Apr 22 22:10:08.289: INFO: Container config-reloader ready: true, restart count 0 Apr 22 22:10:08.289: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 22 22:10:08.289: INFO: Container grafana ready: true, restart count 0 Apr 22 22:10:08.289: INFO: Container prometheus ready: true, restart count 1 Apr 22 22:10:08.289: INFO: collectd-g2c8k started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded) Apr 22 22:10:08.289: INFO: Container collectd ready: true, restart count 0 Apr 22 22:10:08.289: INFO: Container collectd-exporter ready: true, restart count 0 Apr 22 22:10:08.289: INFO: Container rbac-proxy ready: true, restart count 0 Apr 22 22:10:08.289: INFO: kube-proxy-v8fdh started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.289: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:10:08.289: INFO: kube-flannel-l4rjs started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:10:08.289: INFO: Init container install-cni ready: true, restart count 2 Apr 22 22:10:08.289: INFO: Container kube-flannel ready: true, restart count 3 Apr 22 22:10:08.289: INFO: externalname-service-dszzl started at 2022-04-22 22:07:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.289: INFO: Container externalname-service ready: true, restart count 0 Apr 22 22:10:08.289: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g started at 2022-04-22 20:16:40 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.289: INFO: Container tas-extender ready: true, restart count 0 Apr 22 22:10:08.289: INFO: externalname-service-6k4wn started at 2022-04-22 22:07:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.289: INFO: Container externalname-service ready: true, restart count 0 Apr 22 22:10:08.289: INFO: cmk-init-discover-node1-7s78z started at 2022-04-22 20:11:46 +0000 UTC (0+3 container statuses recorded) Apr 22 22:10:08.289: INFO: Container discover ready: false, restart count 0 Apr 22 22:10:08.289: INFO: Container init ready: false, restart count 0 Apr 22 22:10:08.289: INFO: Container install ready: false, restart count 0 Apr 22 22:10:08.289: INFO: node-feature-discovery-worker-2hkr5 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.289: INFO: Container nfd-worker ready: true, restart count 0 Apr 22 22:10:08.289: INFO: node-exporter-9zzfv started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:08.289: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:10:08.289: INFO: Container node-exporter ready: true, restart
count 0 Apr 22 22:10:08.289: INFO: kube-multus-ds-amd64-x8jqs started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.289: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:10:08.289: INFO: cmk-2vd7z started at 2022-04-22 20:12:29 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:08.289: INFO: Container nodereport ready: true, restart count 0 Apr 22 22:10:08.289: INFO: Container reconcile ready: true, restart count 0 Apr 22 22:10:08.479: INFO: Latency metrics for node node1 Apr 22 22:10:08.479: INFO: Logging node info for node node2 Apr 22 22:10:08.483: INFO: Node Info: &Node{ObjectMeta:{node2 ef89f5d1-0c69-4be8-a041-8437402ef215 50941 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:12:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:12:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:30 +0000 UTC,LastTransitionTime:2022-04-22 20:02:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e6f6d1644f942b881dbf2d9722ff85b,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:cc218e06-beff-411d-b91e-f4a272d9c83f,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:10:08.484: INFO: Logging kubelet events for node node2 Apr 22 22:10:08.487: INFO: Logging pods the kubelet thinks are on node node2 Apr 22 22:10:08.499: INFO: kube-flannel-2kskh started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:10:08.499: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:10:08.499: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 22:10:08.499: INFO: cmk-init-discover-node2-2m4dr started at 2022-04-22 20:12:06 +0000 UTC (0+3 container statuses recorded) Apr 22 22:10:08.499: INFO: Container discover ready: false, restart count 0 Apr 22 22:10:08.499: INFO: Container init ready: false, restart count 0 Apr 22 22:10:08.499: INFO: Container install ready: false, restart count 0 Apr 22 22:10:08.499: INFO: node-exporter-c4bhs started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:08.499: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:10:08.499: INFO:
Container node-exporter ready: true, restart count 0 Apr 22 22:10:08.499: INFO: kube-proxy-jvkvz started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.499: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:10:08.499: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.499: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 22 22:10:08.499: INFO: node-feature-discovery-worker-bktph started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.499: INFO: Container nfd-worker ready: true, restart count 0 Apr 22 22:10:08.499: INFO: cmk-vdkxb started at 2022-04-22 20:12:30 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:08.499: INFO: Container nodereport ready: true, restart count 0 Apr 22 22:10:08.499: INFO: Container reconcile ready: true, restart count 0 Apr 22 22:10:08.499: INFO: cmk-webhook-6c9d5f8578-nmxns started at 2022-04-22 20:12:30 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.499: INFO: Container cmk-webhook ready: true, restart count 0 Apr 22 22:10:08.499: INFO: nginx-proxy-node2 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.499: INFO: Container nginx-proxy ready: true, restart count 1 Apr 22 22:10:08.499: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.500: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 22 22:10:08.500: INFO: replace-27511089-t4kxj started at 2022-04-22 22:09:00 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.500: INFO: Container c ready: true, restart count 0 Apr 22 22:10:08.500: INFO: execpodfhnb2 started at 2022-04-22 22:08:01 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.500: INFO: Container agnhost-container ready: true, restart count 0 Apr 22 22:10:08.500: INFO: kube-multus-ds-amd64-kjrqq started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.500: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:10:08.500: INFO: collectd-ptpbz started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded) Apr 22 22:10:08.500: INFO: Container collectd ready: true, restart count 0 Apr 22 22:10:08.500: INFO: Container collectd-exporter ready: true, restart count 0 Apr 22 22:10:08.500: INFO: Container rbac-proxy ready: true, restart count 0 Apr 22 22:10:08.700: INFO: Latency metrics for node node2 Apr 22 22:10:08.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6439" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [145.793 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:09:57.941: Unexpected error: <*errors.errorString | 0xc0010a7580>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30188 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30188 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":32,"skipped":519,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} Apr 22 22:10:08.715: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:07:55.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5126 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-5126 I0422 22:07:55.196197 31 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5126, replica count: 2 I0422 22:07:58.247382 31 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 22:08:01.249565 31 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 22:08:01.249: INFO: Creating new exec pod Apr 22 22:08:06.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 22 22:08:06.519: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 22 22:08:06.519: INFO: stdout: "externalname-service-dszzl" Apr 22 22:08:06.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.22.240 80' Apr 22 22:08:06.747: INFO: stderr: "+ nc -v -t -w 2 10.233.22.240 80\n+ echo 
hostName\nConnection to 10.233.22.240 80 port [tcp/http] succeeded!\n" Apr 22 22:08:06.747: INFO: stdout: "" Apr 22 22:08:07.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.22.240 80' Apr 22 22:08:08.015: INFO: stderr: "+ nc -v -t -w 2 10.233.22.240 80\n+ echo hostName\nConnection to 10.233.22.240 80 port [tcp/http] succeeded!\n" Apr 22 22:08:08.015: INFO: stdout: "externalname-service-6k4wn" Apr 22 22:08:08.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:08.275: INFO: rc: 1 Apr 22 22:08:08.275: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:09.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:09.523: INFO: rc: 1 Apr 22 22:08:09.523: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:10.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:10.533: INFO: rc: 1 Apr 22 22:08:10.533: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:11.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:11.528: INFO: rc: 1 Apr 22 22:08:11.528: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:12.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:12.511: INFO: rc: 1 Apr 22 22:08:12.511: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:13.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:13.537: INFO: rc: 1 Apr 22 22:08:13.537: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:14.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:14.524: INFO: rc: 1 Apr 22 22:08:14.524: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:15.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:15.585: INFO: rc: 1 Apr 22 22:08:15.585: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:16.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:16.627: INFO: rc: 1 Apr 22 22:08:16.627: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:17.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:17.566: INFO: rc: 1 Apr 22 22:08:17.566: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:18.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:18.527: INFO: rc: 1 Apr 22 22:08:18.527: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:19.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:19.518: INFO: rc: 1 Apr 22 22:08:19.518: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:20.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:20.547: INFO: rc: 1 Apr 22 22:08:20.548: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:21.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:21.534: INFO: rc: 1 Apr 22 22:08:21.534: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:22.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:22.518: INFO: rc: 1 Apr 22 22:08:22.518: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:23.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:23.538: INFO: rc: 1 Apr 22 22:08:23.538: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:24.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:24.930: INFO: rc: 1 Apr 22 22:08:24.930: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:25.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:26.084: INFO: rc: 1 Apr 22 22:08:26.084: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:26.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:27.029: INFO: rc: 1 Apr 22 22:08:27.029: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:27.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:27.912: INFO: rc: 1 Apr 22 22:08:27.912: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:28.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:29.519: INFO: rc: 1 Apr 22 22:08:29.519: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:30.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:30.541: INFO: rc: 1 Apr 22 22:08:30.541: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:31.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:31.918: INFO: rc: 1 Apr 22 22:08:31.919: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:32.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:32.522: INFO: rc: 1 Apr 22 22:08:32.522: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:33.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:33.502: INFO: rc: 1 Apr 22 22:08:33.503: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:34.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:34.521: INFO: rc: 1 Apr 22 22:08:34.521: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:35.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:35.542: INFO: rc: 1 Apr 22 22:08:35.542: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:36.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:36.510: INFO: rc: 1 Apr 22 22:08:36.510: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:37.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:37.513: INFO: rc: 1 Apr 22 22:08:37.513: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:38.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:38.534: INFO: rc: 1 Apr 22 22:08:38.534: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:39.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:39.529: INFO: rc: 1 Apr 22 22:08:39.529: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:40.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:40.521: INFO: rc: 1 Apr 22 22:08:40.522: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:41.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:41.533: INFO: rc: 1 Apr 22 22:08:41.533: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:42.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:42.539: INFO: rc: 1 Apr 22 22:08:42.539: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:43.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:43.505: INFO: rc: 1 Apr 22 22:08:43.505: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:44.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:44.527: INFO: rc: 1 Apr 22 22:08:44.527: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:45.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:45.523: INFO: rc: 1 Apr 22 22:08:45.523: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:46.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:46.508: INFO: rc: 1 Apr 22 22:08:46.508: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:47.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:47.516: INFO: rc: 1 Apr 22 22:08:47.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:48.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:48.520: INFO: rc: 1 Apr 22 22:08:48.520: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:49.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:49.521: INFO: rc: 1 Apr 22 22:08:49.521: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:50.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:50.518: INFO: rc: 1 Apr 22 22:08:50.518: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:51.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:51.600: INFO: rc: 1 Apr 22 22:08:51.600: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:52.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:52.525: INFO: rc: 1 Apr 22 22:08:52.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:53.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:53.502: INFO: rc: 1 Apr 22 22:08:53.502: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:54.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:54.504: INFO: rc: 1 Apr 22 22:08:54.504: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:55.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:55.523: INFO: rc: 1 Apr 22 22:08:55.523: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:56.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:56.525: INFO: rc: 1 Apr 22 22:08:56.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:57.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:57.509: INFO: rc: 1 Apr 22 22:08:57.509: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:08:58.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:58.514: INFO: rc: 1 Apr 22 22:08:58.515: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:08:59.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:08:59.780: INFO: rc: 1 Apr 22 22:08:59.780: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:00.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:00.570: INFO: rc: 1 Apr 22 22:09:00.570: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:01.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:01.659: INFO: rc: 1 Apr 22 22:09:01.659: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:02.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:02.545: INFO: rc: 1 Apr 22 22:09:02.545: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:03.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:03.674: INFO: rc: 1 Apr 22 22:09:03.674: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:04.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:04.525: INFO: rc: 1 Apr 22 22:09:04.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:05.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:05.521: INFO: rc: 1 Apr 22 22:09:05.521: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:06.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:06.496: INFO: rc: 1 Apr 22 22:09:06.496: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:07.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:07.543: INFO: rc: 1 Apr 22 22:09:07.543: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:08.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:08.518: INFO: rc: 1 Apr 22 22:09:08.518: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:09.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:09.504: INFO: rc: 1 Apr 22 22:09:09.504: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:10.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:10.513: INFO: rc: 1 Apr 22 22:09:10.513: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:11.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:11.522: INFO: rc: 1 Apr 22 22:09:11.522: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:12.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:12.522: INFO: rc: 1 Apr 22 22:09:12.522: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:13.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:13.503: INFO: rc: 1 Apr 22 22:09:13.503: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:14.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:14.517: INFO: rc: 1 Apr 22 22:09:14.517: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:15.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:15.519: INFO: rc: 1 Apr 22 22:09:15.519: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:16.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:16.514: INFO: rc: 1 Apr 22 22:09:16.514: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:17.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:17.527: INFO: rc: 1 Apr 22 22:09:17.527: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:18.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:18.519: INFO: rc: 1 Apr 22 22:09:18.519: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:19.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:19.507: INFO: rc: 1 Apr 22 22:09:19.507: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:20.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:20.521: INFO: rc: 1 Apr 22 22:09:20.521: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:21.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:21.504: INFO: rc: 1 Apr 22 22:09:21.504: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:22.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:22.518: INFO: rc: 1 Apr 22 22:09:22.518: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:23.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:23.530: INFO: rc: 1 Apr 22 22:09:23.530: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:24.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:24.537: INFO: rc: 1 Apr 22 22:09:24.537: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:25.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:25.517: INFO: rc: 1 Apr 22 22:09:25.517: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:26.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:26.513: INFO: rc: 1 Apr 22 22:09:26.513: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:27.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:27.524: INFO: rc: 1 Apr 22 22:09:27.524: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:28.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:28.510: INFO: rc: 1 Apr 22 22:09:28.510: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:29.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:29.840: INFO: rc: 1 Apr 22 22:09:29.840: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:30.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:30.532: INFO: rc: 1 Apr 22 22:09:30.532: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:31.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:31.513: INFO: rc: 1 Apr 22 22:09:31.513: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:32.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:32.516: INFO: rc: 1 Apr 22 22:09:32.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:33.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:33.509: INFO: rc: 1 Apr 22 22:09:33.509: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:34.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:34.530: INFO: rc: 1 Apr 22 22:09:34.530: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:35.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:35.506: INFO: rc: 1 Apr 22 22:09:35.506: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:36.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:36.511: INFO: rc: 1 Apr 22 22:09:36.511: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:37.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:37.527: INFO: rc: 1 Apr 22 22:09:37.527: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:38.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:38.529: INFO: rc: 1 Apr 22 22:09:38.529: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:39.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:39.527: INFO: rc: 1 Apr 22 22:09:39.527: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:40.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:40.526: INFO: rc: 1 Apr 22 22:09:40.526: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:41.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:41.519: INFO: rc: 1 Apr 22 22:09:41.519: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:42.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:42.530: INFO: rc: 1 Apr 22 22:09:42.530: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:43.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:43.519: INFO: rc: 1 Apr 22 22:09:43.519: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:44.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:44.607: INFO: rc: 1 Apr 22 22:09:44.607: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:45.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:45.519: INFO: rc: 1 Apr 22 22:09:45.519: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:46.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:46.525: INFO: rc: 1 Apr 22 22:09:46.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 22 22:09:47.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351' Apr 22 22:09:47.519: INFO: rc: 1 Apr 22 22:09:47.519: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30351 + echo hostName nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 22 22:09:48.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351'
Apr 22 22:09:48.524: INFO: rc: 1
Apr 22 22:09:48.524: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351:
Command stdout:
stderr:
+ nc -v -t -w 2 10.10.190.207 30351
+ echo hostName
nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
(The identical probe was retried roughly once per second from 22:09:49.276 through 22:10:07.511; every attempt returned rc: 1 with the same stderr, "nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused", followed by "Retrying...".)
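The probe being retried above is a plain kubectl exec of netcat against the node IP and NodePort. As a rough standalone reproduction, the Go sketch below re-runs the same command on a 1-second interval with the 2-minute budget that the eventual failure message reports. The interval is inferred from the log timestamps, the namespace, pod name, and endpoint are specific to this run, and this is an approximation of the check in test/e2e/network/service.go, not its actual source.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Values taken from this specific run; they will differ elsewhere.
	const (
		kubeconfig = "/root/.kube/config"
		namespace  = "services-5126"
		pod        = "execpodfhnb2"
		probe      = "echo hostName | nc -v -t -w 2 10.10.190.207 30351"
	)
	// 1s interval inferred from the timestamps; 2m budget taken from the
	// "within 2m0s timeout" failure message below.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
			"--namespace="+namespace, "exec", pod, "--",
			"/bin/sh", "-x", "-c", probe)
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("reachable: %s", out)
			return
		}
		fmt.Println("Retrying...")
		time.Sleep(time.Second)
	}
	fmt.Println("service is not reachable within 2m0s timeout")
}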
Apr 22 22:10:08.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351'
Apr 22 22:10:08.561: INFO: rc: 1
Apr 22 22:10:08.561: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351:
Command stdout:
stderr:
+ nc -v -t -w 2 10.10.190.207 30351
+ echo hostName
nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
Apr 22 22:10:08.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351'
Apr 22 22:10:08.790: INFO: rc: 1
Apr 22 22:10:08.791: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5126 exec execpodfhnb2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30351:
Command stdout:
stderr:
+ nc -v -t -w 2 10.10.190.207 30351
+ echo hostName
nc: connect to 10.10.190.207 port 30351 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
Apr 22 22:10:08.791: FAIL: Unexpected error:
    <*errors.errorString | 0xc0046e23b0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30351 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30351 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001490f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001490f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001490f00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Apr 22 22:10:08.792: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-5126".
STEP: Found 17 events.
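The 17 events that follow are the framework's standard post-failure diagnostics for the test namespace. Below is a minimal client-go sketch of the same collection step, under the assumption that a kubeconfig is available at the path used throughout this log; sorting by first-seen timestamp is a readability choice in the sketch, not necessarily what the framework does.

package main

import (
	"context"
	"fmt"
	"sort"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used throughout this log; adjust elsewhere.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Namespace of the failed test in this run.
	evs, err := cs.CoreV1().Events("services-5126").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	items := evs.Items
	// Sort by first-seen time so the dump reads chronologically (an
	// assumption for readability).
	sort.Slice(items, func(i, j int) bool {
		return items[i].FirstTimestamp.Before(&items[j].FirstTimestamp)
	})
	fmt.Printf("Found %d events.\n", len(items))
	for _, e := range items {
		fmt.Printf("At %v - event for %s: {%s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
}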
Apr 22 22:10:08.807: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodfhnb2: { } Scheduled: Successfully assigned services-5126/execpodfhnb2 to node2
Apr 22 22:10:08.807: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-6k4wn: { } Scheduled: Successfully assigned services-5126/externalname-service-6k4wn to node1
Apr 22 22:10:08.807: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-dszzl: { } Scheduled: Successfully assigned services-5126/externalname-service-dszzl to node1
Apr 22 22:10:08.807: INFO: At 2022-04-22 22:07:55 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-6k4wn
Apr 22 22:10:08.807: INFO: At 2022-04-22 22:07:55 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-dszzl
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:07:57 +0000 UTC - event for externalname-service-6k4wn: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:07:57 +0000 UTC - event for externalname-service-6k4wn: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 295.820999ms
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:07:57 +0000 UTC - event for externalname-service-6k4wn: {kubelet node1} Started: Started container externalname-service
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:07:57 +0000 UTC - event for externalname-service-6k4wn: {kubelet node1} Created: Created container externalname-service
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:07:57 +0000 UTC - event for externalname-service-dszzl: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:07:57 +0000 UTC - event for externalname-service-dszzl: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 344.262511ms
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:07:58 +0000 UTC - event for externalname-service-dszzl: {kubelet node1} Started: Started container externalname-service
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:07:58 +0000 UTC - event for externalname-service-dszzl: {kubelet node1} Created: Created container externalname-service
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:08:02 +0000 UTC - event for execpodfhnb2: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:08:02 +0000 UTC - event for execpodfhnb2: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 295.145161ms
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:08:03 +0000 UTC - event for execpodfhnb2: {kubelet node2} Started: Started container agnhost-container
Apr 22 22:10:08.808: INFO: At 2022-04-22 22:08:03 +0000 UTC - event for execpodfhnb2: {kubelet node2} Created: Created container agnhost-container
Apr 22 22:10:08.810: INFO: POD                         NODE   PHASE    GRACE  CONDITIONS
Apr 22 22:10:08.810: INFO: execpodfhnb2                node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:08:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:08:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:08:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:08:01 +0000 UTC }]
Apr 22 22:10:08.810: INFO: externalname-service-6k4wn  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:07:55 +0000
UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:07:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:07:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:07:55 +0000 UTC }] Apr 22 22:10:08.810: INFO: externalname-service-dszzl node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:07:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:07:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:07:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 22:07:55 +0000 UTC }] Apr 22 22:10:08.810: INFO: Apr 22 22:10:08.815: INFO: Logging node info for node master1 Apr 22 22:10:08.817: INFO: Node Info: &Node{ObjectMeta:{master1 70710064-7222-41b1-b51e-81deaa6e7014 50936 0 2022-04-22 19:56:45 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:56:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-22 20:04:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:59:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:025a90e4dec046189b065fcf68380be7,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:7e907077-ed98-4d46-8305-29673eaf3bf3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:10:08.818: INFO: Logging kubelet events for node master1 Apr 22 22:10:08.820: INFO: Logging pods the kubelet thinks is on node master1 Apr 22 22:10:08.829: INFO: kube-proxy-hfgsd started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.829: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:10:08.829: INFO: kube-flannel-6vhmq started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:10:08.829: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:10:08.829: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:10:08.829: INFO: dns-autoscaler-7df78bfcfb-smkxp started at 2022-04-22 20:00:11 +0000 UTC (0+1 container statuses recorded) 
Apr 22 22:10:08.829: INFO: Container autoscaler ready: true, restart count 2 Apr 22 22:10:08.829: INFO: container-registry-65d7c44b96-7r6xc started at 2022-04-22 20:04:24 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:08.829: INFO: Container docker-registry ready: true, restart count 0 Apr 22 22:10:08.829: INFO: Container nginx ready: true, restart count 0 Apr 22 22:10:08.829: INFO: node-exporter-b7qpl started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:08.829: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:10:08.829: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:10:08.829: INFO: kube-apiserver-master1 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.829: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:10:08.829: INFO: kube-controller-manager-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.829: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:10:08.829: INFO: kube-multus-ds-amd64-px448 started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.829: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:10:08.829: INFO: prometheus-operator-585ccfb458-zsrdh started at 2022-04-22 20:13:26 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:08.829: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:10:08.829: INFO: Container prometheus-operator ready: true, restart count 0 Apr 22 22:10:08.829: INFO: kube-scheduler-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:08.829: INFO: Container kube-scheduler ready: true, restart count 0 Apr 22 22:10:09.291: INFO: Latency metrics for node master1 Apr 22 22:10:09.291: INFO: Logging node info for node master2 Apr 22 22:10:09.294: INFO: Node Info: &Node{ObjectMeta:{master2 4a346a45-ed0b-49d9-a2ad-b419d2c4705c 50999 0 2022-04-22 19:57:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:46 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-04-22 20:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-04-22 20:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a68fd05f71b4f40ab5ab92028e707cc,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:45292226-7389-4aa9-8a98-33e443731d14,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:10:09.295: INFO: Logging kubelet 
events for node master2 Apr 22 22:10:09.298: INFO: Logging pods the kubelet thinks is on node master2 Apr 22 22:10:09.309: INFO: node-feature-discovery-controller-cff799f9f-jfpb6 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.309: INFO: Container nfd-controller ready: true, restart count 0 Apr 22 22:10:09.309: INFO: node-exporter-4tbfp started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:09.309: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:10:09.309: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:10:09.309: INFO: kube-apiserver-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.309: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:10:09.309: INFO: kube-controller-manager-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.309: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 22 22:10:09.309: INFO: kube-proxy-df6vx started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.309: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 22:10:09.309: INFO: coredns-8474476ff8-fhb42 started at 2022-04-22 20:00:09 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.309: INFO: Container coredns ready: true, restart count 1 Apr 22 22:10:09.309: INFO: kube-scheduler-master2 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.309: INFO: Container kube-scheduler ready: true, restart count 1 Apr 22 22:10:09.309: INFO: kube-flannel-jlvdn started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:10:09.309: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:10:09.309: INFO: Container kube-flannel ready: true, restart count 1 Apr 22 22:10:09.309: INFO: kube-multus-ds-amd64-7hw9v started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.309: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:10:09.394: INFO: Latency metrics for node master2 Apr 22 22:10:09.394: INFO: Logging node info for node master3 Apr 22 22:10:09.397: INFO: Node Info: &Node{ObjectMeta:{master3 43c25e47-7b5c-4cf0-863e-39d16b72dcb3 50992 0 2022-04-22 19:57:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-04-22 19:58:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-04-22 20:11:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:10:08 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e38c1766e8048fab7e120a1bdaf206c,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7266f836-7ba1-4d9b-9691-d8344ab173f1,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:10:09.398: INFO: Logging kubelet events for node master3 Apr 22 22:10:09.400: INFO: Logging pods the kubelet thinks is on node master3 Apr 22 22:10:09.408: INFO: kube-apiserver-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.408: INFO: Container kube-apiserver ready: true, restart count 0 Apr 22 22:10:09.408: INFO: kube-controller-manager-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.408: INFO: Container kube-controller-manager ready: true, restart count 3 Apr 22 22:10:09.408: INFO: kube-scheduler-master3 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.409: INFO: Container kube-scheduler ready: true, restart count 2 Apr 22 22:10:09.409: INFO: node-exporter-tnqsz started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 22:10:09.409: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 22:10:09.409: INFO: Container node-exporter ready: true, restart count 0 Apr 22 22:10:09.409: INFO: kube-proxy-z9q2t started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.409: INFO: Container kube-proxy ready: true, restart count 1 Apr 22 22:10:09.409: INFO: kube-flannel-6jkw9 started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 22:10:09.409: INFO: Init container install-cni ready: true, restart count 0 Apr 22 22:10:09.409: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 22:10:09.409: INFO: kube-multus-ds-amd64-tlrjm started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.409: INFO: Container kube-multus ready: true, restart count 1 Apr 22 22:10:09.409: INFO: coredns-8474476ff8-fdcj7 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 22:10:09.409: INFO: Container coredns ready: true, restart count 1 Apr 22 22:10:09.493: INFO: Latency metrics for node master3 Apr 22 22:10:09.493: INFO: Logging node info for node node1 Apr 22 22:10:09.496: INFO: Node Info: &Node{ObjectMeta:{node1 e0ec3d42-4e2e-47e3-b369-98011b25b39b 50956 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true 
feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:11:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:29 +0000 UTC,LastTransitionTime:2022-04-22 20:02:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:05 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:05 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:05 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:10:05 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cb8bd90647b418e9defe4fbcf1e6b5b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:bd49e3f7-3bce-4d4e-8596-432fc9a7c1c3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 22:10:09.497: INFO: Logging 
kubelet events for node node1
Apr 22 22:10:09.500: INFO: Logging pods the kubelet thinks is on node node1
Apr 22 22:10:09.515: INFO: kube-flannel-l4rjs started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 22:10:09.515: INFO: Init container install-cni ready: true, restart count 2
Apr 22 22:10:09.515: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 22:10:09.515: INFO: externalname-service-dszzl started at 2022-04-22 22:07:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.515: INFO: Container externalname-service ready: true, restart count 0
Apr 22 22:10:09.515: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g started at 2022-04-22 20:16:40 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.515: INFO: Container tas-extender ready: true, restart count 0
Apr 22 22:10:09.515: INFO: externalname-service-6k4wn started at 2022-04-22 22:07:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.515: INFO: Container externalname-service ready: true, restart count 0
Apr 22 22:10:09.515: INFO: cmk-init-discover-node1-7s78z started at 2022-04-22 20:11:46 +0000 UTC (0+3 container statuses recorded)
Apr 22 22:10:09.515: INFO: Container discover ready: false, restart count 0
Apr 22 22:10:09.515: INFO: Container init ready: false, restart count 0
Apr 22 22:10:09.515: INFO: Container install ready: false, restart count 0
Apr 22 22:10:09.515: INFO: node-feature-discovery-worker-2hkr5 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.515: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 22:10:09.515: INFO: node-exporter-9zzfv started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 22:10:09.515: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 22:10:09.515: INFO: Container node-exporter ready: true, restart count 0
Apr 22 22:10:09.515: INFO: kube-multus-ds-amd64-x8jqs started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.515: INFO: Container kube-multus ready: true, restart count 1
Apr 22 22:10:09.515: INFO: cmk-2vd7z started at 2022-04-22 20:12:29 +0000 UTC (0+2 container statuses recorded)
Apr 22 22:10:09.515: INFO: Container nodereport ready: true, restart count 0
Apr 22 22:10:09.515: INFO: Container reconcile ready: true, restart count 0
Apr 22 22:10:09.515: INFO: nginx-proxy-node1 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.515: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 22:10:09.515: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.515: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 22:10:09.516: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.516: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 22:10:09.516: INFO: prometheus-k8s-0 started at 2022-04-22 20:13:52 +0000 UTC (0+4 container statuses recorded)
Apr 22 22:10:09.516: INFO: Container config-reloader ready: true, restart count 0
Apr 22 22:10:09.516: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 22:10:09.516: INFO: Container grafana ready: true, restart count 0
Apr 22 22:10:09.516: INFO: Container prometheus ready: true, restart count 1
Apr 22 22:10:09.516: INFO: collectd-g2c8k started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded)
Apr 22 22:10:09.516: INFO: Container collectd ready: true, restart count 0
Apr 22 22:10:09.516: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 22:10:09.516: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 22:10:09.516: INFO: kube-proxy-v8fdh started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.516: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 22:10:09.711: INFO: Latency metrics for node node1
Apr 22 22:10:09.711: INFO: Logging node info for node node2
Apr 22 22:10:09.713: INFO: Node Info: &Node{ObjectMeta:{node2 ef89f5d1-0c69-4be8-a041-8437402ef215 50941 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources:
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:12:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 20:12:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:30 +0000 UTC,LastTransitionTime:2022-04-22 20:02:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 22:10:01 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e6f6d1644f942b881dbf2d9722ff85b,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:cc218e06-beff-411d-b91e-f4a272d9c83f,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 22:10:09.714: INFO: Logging kubelet events for node node2
Apr 22 22:10:09.716: INFO: Logging pods the kubelet thinks is on node node2
Apr 22 22:10:09.729: INFO: kube-flannel-2kskh started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 22:10:09.729: INFO: Init container install-cni ready: true, restart count 0
Apr 22 22:10:09.729: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 22:10:09.729: INFO: cmk-init-discover-node2-2m4dr started at 2022-04-22 20:12:06 +0000 UTC (0+3 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container discover ready: false, restart count 0
Apr 22 22:10:09.729: INFO: Container init ready: false, restart count 0
Apr 22 22:10:09.729: INFO: Container install ready: false, restart count 0
Apr 22 22:10:09.729: INFO: node-exporter-c4bhs started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 22:10:09.729: INFO: Container node-exporter ready: true, restart count 0
Apr 22 22:10:09.729: INFO: kube-proxy-jvkvz started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 22:10:09.729: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 22:10:09.729: INFO: node-feature-discovery-worker-bktph started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 22:10:09.729: INFO: cmk-vdkxb started at 2022-04-22 20:12:30 +0000 UTC (0+2 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container nodereport ready: true, restart count 0
Apr 22 22:10:09.729: INFO: Container reconcile ready: true, restart count 0
Apr 22 22:10:09.729: INFO: cmk-webhook-6c9d5f8578-nmxns started at 2022-04-22 20:12:30 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 22:10:09.729: INFO: nginx-proxy-node2 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 22:10:09.729: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 22:10:09.729: INFO: replace-27511089-t4kxj started at 2022-04-22 22:09:00 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container c ready: true, restart count 0
Apr 22 22:10:09.729: INFO: kube-multus-ds-amd64-kjrqq started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container kube-multus ready: true, restart count 1
Apr 22 22:10:09.729: INFO: collectd-ptpbz started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container collectd ready: true, restart count 0
Apr 22 22:10:09.729: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 22:10:09.729: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 22:10:09.729: INFO: execpodfhnb2 started at 2022-04-22 22:08:01 +0000 UTC (0+1 container statuses recorded)
Apr 22 22:10:09.729: INFO: Container agnhost-container ready: true, restart count 0
Apr 22 22:10:09.883: INFO: Latency metrics for node node2
Apr 22 22:10:09.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5126" for this suite.
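------------------------------
The two "Node Info" dumps above are each a single log entry: the framework serializes the whole Node object (labels, managed fields, conditions, capacity, allocatable, cached images) inline when a spec fails. A minimal client-go sketch of pulling the same condition and allocatable data directly is shown below; it is illustrative only, not the framework's actual dump helper, and it assumes the same kubeconfig path the suite logs at startup:

// nodestatus.go — illustrative sketch, not part of the e2e framework.
// Prints the conditions and allocatable resources that the Node Info
// dumps above serialize inline.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite uses (/root/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, name := range []string{"node1", "node2"} {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println(name)
		for _, c := range node.Status.Conditions {
			// e.g. "Ready  True  KubeletReady"
			fmt.Printf("  %-20s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("  allocatable cpu:   ", node.Status.Allocatable.Cpu().String())
		fmt.Println("  allocatable memory:", node.Status.Allocatable.Memory().String())
	}
}
------------------------------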
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [134.744 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 22 22:10:08.791: Unexpected error:
      <*errors.errorString | 0xc0046e23b0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30351 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30351 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":32,"skipped":651,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
Apr 22 22:10:09.900: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":29,"skipped":597,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
Apr 22 22:08:59.844: INFO: Running AfterSuite actions on all nodes
Apr 22 22:10:09.967: INFO: Running AfterSuite actions on node 1
Apr 22 22:10:09.967: INFO: Skipping dumping logs from cluster

Summarizing 6 Failures:

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

Ran 320 of 5773 Specs in 761.440 seconds
FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5453 Skipped

Ginkgo ran 1 suite in 12m43.109446139s
Test Suite Failed
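------------------------------
All six failures in this run share one symptom: a NodePort service never becomes reachable at <nodeIP>:<nodePort> within the 2m0s polling window. The sketch below reproduces that kind of check as a standalone program; it is not the framework's actual helper, just a minimal TCP probe under the same endpoint and timeout taken from the failure message above:

// reachability.go — illustrative sketch of the check that timed out above
// (not the e2e framework's helper): repeatedly dial the NodePort endpoint
// over TCP until it answers or a 2m0s deadline passes.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// node1 InternalIP and the NodePort from the failure message.
	endpoint := "10.10.190.207:30351"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, 5*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("service is reachable on", endpoint)
			return
		}
		time.Sleep(2 * time.Second) // brief back-off between attempts
	}
	fmt.Fprintf(os.Stderr, "service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", endpoint)
	os.Exit(1)
}

Run against a live cluster, a persistent connection failure here (while the endpoint pods report Ready, as they do in the pod listings above) typically points at the node-level forwarding path rather than the workload itself.
------------------------------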