Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1652479087 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

May 13 21:58:09.096: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:58:09.101: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 13 21:58:09.128: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 13 21:58:09.190: INFO: The status of Pod cmk-init-discover-node1-m2p59 is Succeeded, skipping waiting
May 13 21:58:09.190: INFO: The status of Pod cmk-init-discover-node2-hm7r7 is Succeeded, skipping waiting
May 13 21:58:09.190: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 13 21:58:09.190: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 13 21:58:09.190: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 13 21:58:09.207: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 13 21:58:09.207: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 13 21:58:09.207: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 13 21:58:09.207: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 13 21:58:09.207: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 13 21:58:09.207: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 13 21:58:09.207: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 13 21:58:09.207: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 13 21:58:09.207: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 13 21:58:09.207: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 13 21:58:09.207: INFO: e2e test version: v1.21.9
May 13 21:58:09.209: INFO: kube-apiserver version: v1.21.1
May 13 21:58:09.210: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:58:09.215: INFO: Cluster IP family: ipv4
SSSS
------------------------------
May 13 21:58:09.221: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:58:09.241: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
May 13 21:58:09.236: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:58:09.257: INFO: Cluster IP family: ipv4
SS
------------------------------
May 13 21:58:09.239: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:58:09.260: INFO: Cluster IP family: ipv4
S
------------------------------
May 13 21:58:09.239: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:58:09.262: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
May 13 21:58:09.245: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:58:09.265: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
May 13 21:58:09.253: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:58:09.274: INFO: Cluster IP family: ipv4
SSS
------------------------------
May 13 21:58:09.254: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:58:09.276: INFO: Cluster IP family: ipv4
SSS
------------------------------
May 13 21:58:09.254: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:58:09.278: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
May 13 21:58:09.259: INFO: >>> kubeConfig: /root/.kube/config
May 13 21:58:09.281: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 21:58:09.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
W0513 21:58:09.328570 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 21:58:09.328: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 21:58:09.330: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-dd0e3ad0-45c6-4595-88b3-2de1b7b13734
STEP: Creating a pod to test consume secrets
May 13 21:58:09.348: INFO: Waiting up to 5m0s for pod "pod-secrets-e3f34a86-9fa8-4639-8660-5dd8c1413a17" in namespace "secrets-5437" to be "Succeeded or Failed"
May 13 21:58:09.351: INFO: Pod "pod-secrets-e3f34a86-9fa8-4639-8660-5dd8c1413a17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.557438ms
May 13 21:58:11.355: INFO: Pod "pod-secrets-e3f34a86-9fa8-4639-8660-5dd8c1413a17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006620752s
May 13 21:58:13.357: INFO: Pod "pod-secrets-e3f34a86-9fa8-4639-8660-5dd8c1413a17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009129947s
STEP: Saw pod success
May 13 21:58:13.357: INFO: Pod "pod-secrets-e3f34a86-9fa8-4639-8660-5dd8c1413a17" satisfied condition "Succeeded or Failed"
May 13 21:58:13.360: INFO: Trying to get logs from node node1 pod pod-secrets-e3f34a86-9fa8-4639-8660-5dd8c1413a17 container secret-volume-test:
STEP: delete the pod
May 13 21:58:13.378: INFO: Waiting for pod pod-secrets-e3f34a86-9fa8-4639-8660-5dd8c1413a17 to disappear
May 13 21:58:13.380: INFO: Pod pod-secrets-e3f34a86-9fa8-4639-8660-5dd8c1413a17 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 21:58:13.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5437" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSSS
------------------------------
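The spec above mounts a Secret-backed volume with an explicit defaultMode and asserts the resulting file permissions. A minimal client-go sketch of the same pattern, not the e2e framework's actual fixture code; the secret/pod names, namespace, image, and mode value are illustrative assumptions:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        mode := int32(0400) // file mode applied to every key in the volume
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-defaultmode-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName:  "secret-test", // assumed to already exist
                            DefaultMode: &mode,
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "secret-volume-test",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "ls -l /etc/secret-volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

------------------------------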
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:09.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption W0513 21:58:09.369623 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 21:58:09.369: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 21:58:09.371: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:09.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-8121 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:15.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-4588" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:15.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-8121" for this suite. 
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 21:58:15.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
May 13 21:58:16.593: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true)
May 13 21:58:16.729: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 21:58:16.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3185" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":2,"skipped":64,"failed":0}
SSSSSSSSSS
------------------------------
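The garbage-collector spec above hinges on deleteOptions.PropagationPolicy=Orphan leaving the ReplicaSet behind when its owning Deployment is deleted. A small sketch of issuing such a delete with client-go; the namespace and deployment name are placeholders:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // With Orphan propagation the Deployment object is removed, but the
        // garbage collector strips ownerReferences from its ReplicaSets
        // instead of cascading the delete, so the RS and its pods survive.
        orphan := metav1.DeletePropagationOrphan
        if err := client.AppsV1().Deployments("demo-ns").Delete(context.TODO(), "demo-deployment",
            metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
            panic(err)
        }
    }

------------------------------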
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":2,"skipped":64,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:09.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir W0513 21:58:09.348403 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 21:58:09.348: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 21:58:09.350: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium May 13 21:58:09.369: INFO: Waiting up to 5m0s for pod "pod-1b2d47d4-b14d-481e-8e77-d77ad88d6b63" in namespace "emptydir-7754" to be "Succeeded or Failed" May 13 21:58:09.371: INFO: Pod "pod-1b2d47d4-b14d-481e-8e77-d77ad88d6b63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040241ms May 13 21:58:11.374: INFO: Pod "pod-1b2d47d4-b14d-481e-8e77-d77ad88d6b63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005292147s May 13 21:58:13.377: INFO: Pod "pod-1b2d47d4-b14d-481e-8e77-d77ad88d6b63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00825867s May 13 21:58:15.381: INFO: Pod "pod-1b2d47d4-b14d-481e-8e77-d77ad88d6b63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012053264s May 13 21:58:17.385: INFO: Pod "pod-1b2d47d4-b14d-481e-8e77-d77ad88d6b63": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01595403s May 13 21:58:19.389: INFO: Pod "pod-1b2d47d4-b14d-481e-8e77-d77ad88d6b63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.019931963s STEP: Saw pod success May 13 21:58:19.389: INFO: Pod "pod-1b2d47d4-b14d-481e-8e77-d77ad88d6b63" satisfied condition "Succeeded or Failed" May 13 21:58:19.391: INFO: Trying to get logs from node node2 pod pod-1b2d47d4-b14d-481e-8e77-d77ad88d6b63 container test-container: STEP: delete the pod May 13 21:58:19.413: INFO: Waiting for pod pod-1b2d47d4-b14d-481e-8e77-d77ad88d6b63 to disappear May 13 21:58:19.415: INFO: Pod pod-1b2d47d4-b14d-481e-8e77-d77ad88d6b63 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:19.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7754" for this suite. 
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 21:58:09.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
W0513 21:58:09.335565 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 21:58:09.335: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 21:58:09.338: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] Deployment should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 13 21:58:09.340: INFO: Creating simple deployment test-new-deployment
May 13 21:58:09.348: INFO: deployment "test-new-deployment" doesn't have the required revision set
May 13 21:58:11.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 13 21:58:13.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 13 21:58:15.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 13 21:58:17.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 13 21:58:19.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 13 21:58:21.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075889, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the deployment Spec.Replicas was modified
STEP: Patch a scale subresource
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80
May 13 21:58:23.383: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-9828 a75d6295-e31d-4aa8-8d25-78bded7293d2 31852 3 2022-05-13 21:58:09 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-05-13 21:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-13 21:58:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bcb238 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] []
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-05-13 21:58:21 +0000 UTC,LastTransitionTime:2022-05-13 21:58:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2022-05-13 21:58:21 +0000 UTC,LastTransitionTime:2022-05-13 21:58:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
May 13 21:58:23.386: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-9828 0482bfc3-1b6a-4317-b296-b3041b5377ba 31854 3 2022-05-13 21:58:09 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment a75d6295-e31d-4aa8-8d25-78bded7293d2 0xc004bcb627 0xc004bcb628}] [] [{kube-controller-manager Update apps/v1 2022-05-13 21:58:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a75d6295-e31d-4aa8-8d25-78bded7293d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bcb698 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] []
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May 13 21:58:23.390: INFO: Pod "test-new-deployment-847dcfb7fb-j49hn" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-j49hn test-new-deployment-847dcfb7fb- deployment-9828 12e16ba8-8884-422e-bccd-c4ed5c8c14d9 31858 0 2022-05-13 21:58:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 0482bfc3-1b6a-4317-b296-b3041b5377ba 0xc004bcba2f 0xc004bcba40}] [] [{kube-controller-manager Update v1 2022-05-13 21:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0482bfc3-1b6a-4317-b296-b3041b5377ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-92whp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92whp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 21:58:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 13 21:58:23.390: INFO: Pod "test-new-deployment-847dcfb7fb-pkvmk" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-pkvmk test-new-deployment-847dcfb7fb- deployment-9828 842bef71-ffb8-4ae4-83f4-6b8d885a2035 31795 0 2022-05-13 21:58:09 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.213" ], "mac": "82:c0:5e:cb:54:38", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.213" ], "mac": "82:c0:5e:cb:54:38", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 0482bfc3-1b6a-4317-b296-b3041b5377ba 0xc004bcbb9f 0xc004bcbbb0}] [] [{kube-controller-manager Update v1 2022-05-13 21:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0482bfc3-1b6a-4317-b296-b3041b5377ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 21:58:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 21:58:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.213\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-665pl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-665pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 21:58:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 21:58:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 21:58:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 21:58:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.213,StartTime:2022-05-13 21:58:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 21:58:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://339ed6f293ec89b9a6b029c4af5bad826874c3e679326aa7ae6ba6c367fbd389,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.213,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 21:58:23.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9828" for this suite.
• [SLOW TEST:14.109 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Deployment should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 21:58:16.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 21:58:24.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7416" for this suite.
• [SLOW TEST:8.058 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":74,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
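The Deployment spec earlier in this block drives the scale subresource (the "getting scale subresource" / "updating a scale subresource" steps above). A minimal client-go sketch of those two calls, with placeholder namespace and deployment names:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        deployments := client.AppsV1().Deployments("demo-ns")

        // Read the scale subresource (an autoscaling/v1 Scale object).
        scale, err := deployments.GetScale(context.TODO(), "demo-deployment", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("current replicas:", scale.Spec.Replicas)

        // Write it back with a new replica count; only spec.replicas is honored.
        scale.Spec.Replicas = 4
        if _, err := deployments.UpdateScale(context.TODO(), "demo-deployment", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }

------------------------------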
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 21:58:09.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W0513 21:58:09.316189 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 21:58:09.316: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 21:58:09.318: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-65ece270-582d-4925-bd83-9a1e8b3df8db
STEP: Creating a pod to test consume secrets
May 13 21:58:09.336: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7" in namespace "projected-2430" to be "Succeeded or Failed"
May 13 21:58:09.338: INFO: Pod "pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344237ms
May 13 21:58:11.343: INFO: Pod "pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007354946s
May 13 21:58:13.347: INFO: Pod "pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011389966s
May 13 21:58:15.350: INFO: Pod "pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014257766s
May 13 21:58:17.354: INFO: Pod "pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018102252s
May 13 21:58:19.358: INFO: Pod "pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022212773s
May 13 21:58:21.362: INFO: Pod "pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025582404s
May 13 21:58:23.364: INFO: Pod "pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.028251359s
May 13 21:58:25.369: INFO: Pod "pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.032739541s
STEP: Saw pod success
May 13 21:58:25.369: INFO: Pod "pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7" satisfied condition "Succeeded or Failed"
May 13 21:58:25.371: INFO: Trying to get logs from node node2 pod pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7 container projected-secret-volume-test:
STEP: delete the pod
May 13 21:58:25.384: INFO: Waiting for pod pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7 to disappear
May 13 21:58:25.385: INFO: Pod pod-projected-secrets-7107a2df-0dd4-4ffc-9e26-8026ed4a37c7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 21:58:25.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2430" for this suite.
• [SLOW TEST:16.115 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 21:58:23.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 13 21:58:23.430: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 13 21:58:25.454: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 21:58:26.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9947" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
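The ReplicationController spec above provokes a ReplicaFailure condition by exceeding a pod quota. A sketch of the quota plus the condition check in client-go; the namespace is illustrative and the RC creation itself is omitted:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ns := "demo-ns"

        // A quota that caps the namespace at two pods.
        quota := &corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
            Spec: corev1.ResourceQuotaSpec{
                Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
            },
        }
        if _, err := client.CoreV1().ResourceQuotas(ns).Create(context.TODO(), quota, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // After creating an RC that asks for more pods than the quota allows,
        // the controller surfaces the problem as a ReplicaFailure condition.
        rc, err := client.CoreV1().ReplicationControllers(ns).Get(context.TODO(), "condition-test", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, cond := range rc.Status.Conditions {
            if cond.Type == corev1.ReplicationControllerReplicaFailure {
                fmt.Printf("ReplicaFailure: %s (%s)\n", cond.Reason, cond.Message)
            }
        }
    }

------------------------------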
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:09.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns W0513 21:58:09.342409 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 21:58:09.342: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 21:58:09.344: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-364.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-364.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-364.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-364.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-364.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-364.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-364.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 21:58:23.377: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local from pod dns-364/dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63: the server could not find the requested resource (get pods dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63) May 13 21:58:23.380: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local from pod dns-364/dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63: the server could not find the requested resource (get pods dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63) May 13 21:58:23.386: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-364.svc.cluster.local from pod dns-364/dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63: the server could not find the requested resource (get pods dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63) May 13 21:58:23.395: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local from pod dns-364/dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63: the server could not find the requested resource (get pods dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63) May 13 21:58:23.398: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local from pod dns-364/dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63: the server could not find the requested resource (get pods dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63) May 13 21:58:23.402: INFO: Unable to read jessie_udp@dns-test-service-2.dns-364.svc.cluster.local from pod dns-364/dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63: the server could not find the requested resource (get pods dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63) May 13 21:58:23.405: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-364.svc.cluster.local from pod dns-364/dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63: the server could not find the requested resource (get pods dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63) May 13 21:58:23.410: INFO: Lookups using dns-364/dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-364.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local jessie_udp@dns-test-service-2.dns-364.svc.cluster.local jessie_tcp@dns-test-service-2.dns-364.svc.cluster.local] May 13 21:58:28.440: INFO: DNS probes using dns-364/dns-test-63002827-86ae-4d9d-b0b5-b3c7f2c22f63 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:28.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-364" for this suite. 
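The DNS spec's dig probes boil down to resolving the headless service's records from inside the cluster. A small Go sketch of one such lookup; it only succeeds from a pod whose resolver points at cluster DNS, and the FQDN follows the pod-hostname.subdomain.namespace.svc.cluster-domain pattern used above:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // FQDN taken from the test; resolvable only where /etc/resolv.conf
        // points at the cluster DNS service.
        fqdn := "dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local"

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        addrs, err := net.DefaultResolver.LookupHost(ctx, fqdn)
        if err != nil {
            fmt.Println("lookup failed:", err) // expected until the endpoints are published
            return
        }
        fmt.Println("A records:", addrs)
    }

------------------------------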
• [SLOW TEST:19.157 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":1,"skipped":29,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:09.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller W0513 21:58:09.303263 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 21:58:09.303: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 21:58:09.305: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:31.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8232" for this suite. 
• [SLOW TEST:21.802 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:26.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 21:58:26.566: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-56a27486-a88f-4322-9680-a44fb45d7fa2" in namespace "security-context-test-6364" to be "Succeeded or Failed" May 13 21:58:26.571: INFO: Pod "busybox-readonly-false-56a27486-a88f-4322-9680-a44fb45d7fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164247ms May 13 21:58:28.573: INFO: Pod "busybox-readonly-false-56a27486-a88f-4322-9680-a44fb45d7fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006782297s May 13 21:58:30.576: INFO: Pod "busybox-readonly-false-56a27486-a88f-4322-9680-a44fb45d7fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009797453s May 13 21:58:32.579: INFO: Pod "busybox-readonly-false-56a27486-a88f-4322-9680-a44fb45d7fa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012353685s May 13 21:58:32.579: INFO: Pod "busybox-readonly-false-56a27486-a88f-4322-9680-a44fb45d7fa2" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:32.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6364" for this suite. 
• [SLOW TEST:6.056 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:31.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-7bbd5470-0cc1-4be7-9729-42414c04f956 STEP: Creating a pod to test consume configMaps May 13 21:58:31.133: INFO: Waiting up to 5m0s for pod "pod-configmaps-be6d94e1-0949-467d-a887-d16ee6c67338" in namespace "configmap-3154" to be "Succeeded or Failed" May 13 21:58:31.135: INFO: Pod "pod-configmaps-be6d94e1-0949-467d-a887-d16ee6c67338": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146519ms May 13 21:58:33.139: INFO: Pod "pod-configmaps-be6d94e1-0949-467d-a887-d16ee6c67338": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005376524s May 13 21:58:35.143: INFO: Pod "pod-configmaps-be6d94e1-0949-467d-a887-d16ee6c67338": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009715061s STEP: Saw pod success May 13 21:58:35.143: INFO: Pod "pod-configmaps-be6d94e1-0949-467d-a887-d16ee6c67338" satisfied condition "Succeeded or Failed" May 13 21:58:35.145: INFO: Trying to get logs from node node1 pod pod-configmaps-be6d94e1-0949-467d-a887-d16ee6c67338 container agnhost-container: STEP: delete the pod May 13 21:58:35.158: INFO: Waiting for pod pod-configmaps-be6d94e1-0949-467d-a887-d16ee6c67338 to disappear May 13 21:58:35.160: INFO: Pod pod-configmaps-be6d94e1-0949-467d-a887-d16ee6c67338 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:35.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3154" for this suite. 
• ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:19.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 21:58:19.549: INFO: The status of Pod server-envvars-0d2b1583-9a18-443f-b531-532843597909 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:21.553: INFO: The status of Pod server-envvars-0d2b1583-9a18-443f-b531-532843597909 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:23.553: INFO: The status of Pod server-envvars-0d2b1583-9a18-443f-b531-532843597909 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:25.554: INFO: The status of Pod server-envvars-0d2b1583-9a18-443f-b531-532843597909 is Running (Ready = true) May 13 21:58:25.572: INFO: Waiting up to 5m0s for pod "client-envvars-20ac0ac8-e647-4ea0-ad77-0b414829024c" in namespace "pods-9269" to be "Succeeded or Failed" May 13 21:58:25.575: INFO: Pod "client-envvars-20ac0ac8-e647-4ea0-ad77-0b414829024c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.676194ms May 13 21:58:27.579: INFO: Pod "client-envvars-20ac0ac8-e647-4ea0-ad77-0b414829024c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006581239s May 13 21:58:29.583: INFO: Pod "client-envvars-20ac0ac8-e647-4ea0-ad77-0b414829024c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010508976s May 13 21:58:31.588: INFO: Pod "client-envvars-20ac0ac8-e647-4ea0-ad77-0b414829024c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015941789s May 13 21:58:33.593: INFO: Pod "client-envvars-20ac0ac8-e647-4ea0-ad77-0b414829024c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020328251s May 13 21:58:35.597: INFO: Pod "client-envvars-20ac0ac8-e647-4ea0-ad77-0b414829024c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024337819s STEP: Saw pod success May 13 21:58:35.597: INFO: Pod "client-envvars-20ac0ac8-e647-4ea0-ad77-0b414829024c" satisfied condition "Succeeded or Failed" May 13 21:58:35.599: INFO: Trying to get logs from node node2 pod client-envvars-20ac0ac8-e647-4ea0-ad77-0b414829024c container env3cont: STEP: delete the pod May 13 21:58:35.610: INFO: Waiting for pod client-envvars-20ac0ac8-e647-4ea0-ad77-0b414829024c to disappear May 13 21:58:35.612: INFO: Pod client-envvars-20ac0ac8-e647-4ea0-ad77-0b414829024c no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:35.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9269" for this suite. 
• [SLOW TEST:16.106 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:09.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi W0513 21:58:09.306354 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 21:58:09.306: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 21:58:09.308: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 13 21:58:09.312: INFO: >>> kubeConfig: /root/.kube/config May 13 21:58:17.922: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:36.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9388" for this suite. 
• [SLOW TEST:26.789 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:36.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/discovery.k8s.io STEP: getting /apis/discovery.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching May 13 21:58:36.113: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching May 13 21:58:36.116: INFO: starting watch STEP: patching STEP: updating May 13 21:58:36.126: INFO: waiting for watch events with expected annotations May 13 21:58:36.126: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:36.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-8684" for this suite.
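------------------------------
The create/get/list/watch/patch/delete steps above exercise the discovery.k8s.io/v1 EndpointSlice API (v1 graduated in the 1.21 release this cluster runs). A minimal create-list-delete sketch with client-go; the namespace is a placeholder for the test's generated one:

package main

import (
	"context"
	"fmt"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	ns := "default" // namespace assumed

	// creating: an EndpointSlice needs at least an AddressType.
	slice, err := cs.DiscoveryV1().EndpointSlices(ns).Create(ctx, &discoveryv1.EndpointSlice{
		ObjectMeta:  metav1.ObjectMeta{GenerateName: "example-slice-"},
		AddressType: discoveryv1.AddressTypeIPv4,
		Endpoints:   []discoveryv1.Endpoint{{Addresses: []string{"10.0.0.1"}}},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// listing
	list, err := cs.DiscoveryV1().EndpointSlices(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created %s; %d slice(s) in namespace\n", slice.Name, len(list.Items))
	// deleting
	if err := cs.DiscoveryV1().EndpointSlices(ns).Delete(ctx, slice.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------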
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0} SSSS ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0} [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:35.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 21:58:35.196: INFO: Got root ca configmap in namespace "svcaccounts-505" May 13 21:58:35.199: INFO: Deleted root ca configmap in namespace "svcaccounts-505" STEP: waiting for a new root ca configmap created May 13 21:58:35.703: INFO: Recreated root ca configmap in namespace "svcaccounts-505" May 13 21:58:35.705: INFO: Updated root ca configmap in namespace "svcaccounts-505" STEP: waiting for the root ca configmap reconciled May 13 21:58:36.209: INFO: Reconciled root ca configmap in namespace "svcaccounts-505" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:36.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-505" for this suite. • ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:24.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-91b8491c-b69e-4417-93ed-449799630192 STEP: Creating a pod to test consume configMaps May 13 21:58:24.960: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202" in namespace "projected-7464" to be "Succeeded or Failed" May 13 21:58:24.962: INFO: Pod "pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202": Phase="Pending", Reason="", readiness=false. Elapsed: 1.87281ms May 13 21:58:26.965: INFO: Pod "pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005403564s May 13 21:58:28.971: INFO: Pod "pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010650322s May 13 21:58:30.975: INFO: Pod "pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.014537864s May 13 21:58:32.977: INFO: Pod "pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017232975s May 13 21:58:34.980: INFO: Pod "pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020030507s May 13 21:58:36.984: INFO: Pod "pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.023933625s STEP: Saw pod success May 13 21:58:36.984: INFO: Pod "pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202" satisfied condition "Succeeded or Failed" May 13 21:58:36.986: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202 container agnhost-container: STEP: delete the pod May 13 21:58:37.000: INFO: Waiting for pod pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202 to disappear May 13 21:58:37.002: INFO: Pod pod-projected-configmaps-44d39823-d047-451a-b47f-f11ce147f202 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:37.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7464" for this suite. • [SLOW TEST:12.087 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":124,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:25.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-c40bcae9-a4cc-46f7-9e6d-4c2dfccced37 STEP: Creating a pod to test consume configMaps May 13 21:58:25.479: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed" in namespace "projected-4457" to be "Succeeded or Failed" May 13 21:58:25.483: INFO: Pod "pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072747ms May 13 21:58:27.485: INFO: Pod "pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006580807s May 13 21:58:29.488: INFO: Pod "pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.009572321s May 13 21:58:31.493: INFO: Pod "pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013811777s May 13 21:58:33.496: INFO: Pod "pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016874663s May 13 21:58:35.500: INFO: Pod "pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021603462s May 13 21:58:37.504: INFO: Pod "pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.024808551s STEP: Saw pod success May 13 21:58:37.504: INFO: Pod "pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed" satisfied condition "Succeeded or Failed" May 13 21:58:37.507: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed container projected-configmap-volume-test: STEP: delete the pod May 13 21:58:37.518: INFO: Waiting for pod pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed to disappear May 13 21:58:37.521: INFO: Pod pod-projected-configmaps-fdef05dc-db43-4969-ae54-d913ed5568ed no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:37.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4457" for this suite. • [SLOW TEST:12.089 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:32.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 21:58:32.700: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-49fe2879-5d47-4af3-8a95-cd99a75595c1" in namespace "security-context-test-4335" to be "Succeeded or Failed" May 13 21:58:32.703: INFO: Pod "busybox-privileged-false-49fe2879-5d47-4af3-8a95-cd99a75595c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.625524ms May 13 21:58:34.707: INFO: Pod "busybox-privileged-false-49fe2879-5d47-4af3-8a95-cd99a75595c1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00655448s May 13 21:58:36.712: INFO: Pod "busybox-privileged-false-49fe2879-5d47-4af3-8a95-cd99a75595c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011696862s May 13 21:58:38.718: INFO: Pod "busybox-privileged-false-49fe2879-5d47-4af3-8a95-cd99a75595c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0177149s May 13 21:58:40.724: INFO: Pod "busybox-privileged-false-49fe2879-5d47-4af3-8a95-cd99a75595c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.023712897s May 13 21:58:40.724: INFO: Pod "busybox-privileged-false-49fe2879-5d47-4af3-8a95-cd99a75595c1" satisfied condition "Succeeded or Failed" May 13 21:58:40.728: INFO: Got logs for pod "busybox-privileged-false-49fe2879-5d47-4af3-8a95-cd99a75595c1": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:40.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4335" for this suite. • [SLOW TEST:8.069 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":72,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:37.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-ee084d91-6807-4489-9049-4023aafbd412 STEP: Creating the pod May 13 21:58:37.083: INFO: The status of Pod pod-projected-configmaps-e5873368-380f-40c7-a6de-424eae47fb57 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:39.086: INFO: The status of Pod pod-projected-configmaps-e5873368-380f-40c7-a6de-424eae47fb57 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:41.087: INFO: The status of Pod pod-projected-configmaps-e5873368-380f-40c7-a6de-424eae47fb57 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-ee084d91-6807-4489-9049-4023aafbd412 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:43.185: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "projected-8662" for this suite. • [SLOW TEST:6.155 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":134,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:43.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-bfbfa191-6164-4539-9722-4baa6ae5f0cd STEP: Creating a pod to test consume configMaps May 13 21:58:43.231: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6e93041-c724-4c12-aae2-9336476d2fde" in namespace "configmap-6276" to be "Succeeded or Failed" May 13 21:58:43.233: INFO: Pod "pod-configmaps-c6e93041-c724-4c12-aae2-9336476d2fde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.674069ms May 13 21:58:45.237: INFO: Pod "pod-configmaps-c6e93041-c724-4c12-aae2-9336476d2fde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006102565s May 13 21:58:47.241: INFO: Pod "pod-configmaps-c6e93041-c724-4c12-aae2-9336476d2fde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010351202s STEP: Saw pod success May 13 21:58:47.241: INFO: Pod "pod-configmaps-c6e93041-c724-4c12-aae2-9336476d2fde" satisfied condition "Succeeded or Failed" May 13 21:58:47.244: INFO: Trying to get logs from node node1 pod pod-configmaps-c6e93041-c724-4c12-aae2-9336476d2fde container agnhost-container: STEP: delete the pod May 13 21:58:47.256: INFO: Waiting for pod pod-configmaps-c6e93041-c724-4c12-aae2-9336476d2fde to disappear May 13 21:58:47.258: INFO: Pod pod-configmaps-c6e93041-c724-4c12-aae2-9336476d2fde no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:47.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6276" for this suite. 
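------------------------------
The "with mappings" variant above differs from a plain ConfigMap volume in one field: Items. Without Items every key becomes a file named after the key; with Items only the listed keys are projected, each under the relative path given. A sketch of the volume definition (the ConfigMap name, key, and path values are assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"}, // name assumed
				Items: []corev1.KeyToPath{
					// Project only key "data-1", remapped to a new relative path.
					{Key: "data-1", Path: "path/to/data-2"},
				},
			},
		},
	}
	fmt.Printf("volume %q maps %d key(s)\n", vol.Name, len(vol.VolumeSource.ConfigMap.Items))
}
------------------------------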
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":134,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:40.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics May 13 21:58:50.939: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 13 21:58:51.066: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:58:51.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-431" for this suite. 
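------------------------------
"Delete pods created by rc when not orphaning" means the RC is deleted with a non-orphaning propagation policy, so the garbage collector removes its pods through their ownerReferences. A sketch of that deletion; the RC name and namespace are placeholders, since the log does not show them:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation (non-orphaning): the GC deletes the RC's pods
	// after the RC itself is gone. Foreground would block on the pods instead.
	policy := metav1.DeletePropagationBackground
	if err := cs.CoreV1().ReplicationControllers("default").Delete(context.TODO(),
		"simpletest.rc", // RC name and namespace assumed
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}
------------------------------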
• [SLOW TEST:10.196 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":5,"skipped":137,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:28.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-1013 STEP: creating a selector STEP: Creating the service pods in kubernetes May 13 21:58:28.528: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 13 21:58:28.558: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:30.561: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:32.563: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 21:58:34.561: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 21:58:36.563: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 21:58:38.564: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 21:58:40.565: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 21:58:42.562: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 21:58:44.562: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 21:58:46.563: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 21:58:48.562: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 21:58:50.561: INFO: The status of Pod netserver-0 is Running (Ready = true) May 13 21:58:50.566: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 13 21:59:00.599: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 13 21:59:00.599: INFO: Going to poll 10.244.3.156 on port 8081 at least 0 times, with a maximum of 34 tries before failing May 13 21:59:00.601: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.156 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1013 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 21:59:00.601: INFO: >>> kubeConfig: /root/.kube/config May 13 21:59:01.708: INFO: Found all 1 expected endpoints: [netserver-0] May 13 21:59:01.708: INFO: Going to poll 10.244.4.224 on port 8081 at least 0 times, with a maximum of 34 tries before failing May 13 21:59:01.710: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.224 8081 | 
grep -v '^\s*$'] Namespace:pod-network-test-1013 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 21:59:01.710: INFO: >>> kubeConfig: /root/.kube/config May 13 21:59:02.806: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:02.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1013" for this suite. • [SLOW TEST:34.309 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":47,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:47.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 May 13 21:58:47.387: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. 
May 13 21:58:47.725: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 13 21:58:49.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:58:51.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:58:53.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:58:55.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:58:57.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075927, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:59:01.773: INFO: Waited 2.006005253s for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices May 13 21:59:02.174: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:02.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7810" for this suite. 
• [SLOW TEST:15.704 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":7,"skipped":179,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:37.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-d62l STEP: Creating a pod to test atomic-volume-subpath May 13 21:58:37.594: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d62l" in namespace "subpath-7687" to be "Succeeded or Failed" May 13 21:58:37.598: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Pending", Reason="", readiness=false. Elapsed: 3.837435ms May 13 21:58:39.601: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006825971s May 13 21:58:41.605: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010404361s May 13 21:58:43.609: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014672353s May 13 21:58:45.613: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Running", Reason="", readiness=true. Elapsed: 8.019116636s May 13 21:58:47.616: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Running", Reason="", readiness=true. Elapsed: 10.021569659s May 13 21:58:49.619: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Running", Reason="", readiness=true. Elapsed: 12.024630342s May 13 21:58:51.623: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Running", Reason="", readiness=true. Elapsed: 14.028218163s May 13 21:58:53.626: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Running", Reason="", readiness=true. Elapsed: 16.032046297s May 13 21:58:55.631: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Running", Reason="", readiness=true. Elapsed: 18.036605052s May 13 21:58:57.635: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Running", Reason="", readiness=true. Elapsed: 20.040635072s May 13 21:58:59.638: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Running", Reason="", readiness=true. Elapsed: 22.043712794s May 13 21:59:01.642: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Running", Reason="", readiness=true. Elapsed: 24.048181447s May 13 21:59:03.646: INFO: Pod "pod-subpath-test-configmap-d62l": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.052146505s STEP: Saw pod success May 13 21:59:03.646: INFO: Pod "pod-subpath-test-configmap-d62l" satisfied condition "Succeeded or Failed" May 13 21:59:03.649: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-d62l container test-container-subpath-configmap-d62l: STEP: delete the pod May 13 21:59:03.665: INFO: Waiting for pod pod-subpath-test-configmap-d62l to disappear May 13 21:59:03.667: INFO: Pod pod-subpath-test-configmap-d62l no longer exists STEP: Deleting pod pod-subpath-test-configmap-d62l May 13 21:59:03.667: INFO: Deleting pod "pod-subpath-test-configmap-d62l" in namespace "subpath-7687" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:03.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7687" for this suite. • [SLOW TEST:26.131 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:51.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod May 13 21:58:51.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6884 create -f -' May 13 21:58:51.540: INFO: stderr: "" May 13 21:58:51.540: INFO: stdout: "pod/pause created\n" May 13 21:58:51.540: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 13 21:58:51.540: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6884" to be "running and ready" May 13 21:58:51.542: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302003ms May 13 21:58:53.546: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006578657s May 13 21:58:55.552: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011905796s May 13 21:58:57.556: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015693135s May 13 21:58:59.559: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019040335s May 13 21:59:01.561: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.021588566s May 13 21:59:03.566: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 12.026373939s May 13 21:59:03.566: INFO: Pod "pause" satisfied condition "running and ready" May 13 21:59:03.566: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod May 13 21:59:03.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6884 label pods pause testing-label=testing-label-value' May 13 21:59:03.744: INFO: stderr: "" May 13 21:59:03.744: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 13 21:59:03.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6884 get pod pause -L testing-label' May 13 21:59:03.920: INFO: stderr: "" May 13 21:59:03.920: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 12s testing-label-value\n" STEP: removing the label testing-label of a pod May 13 21:59:03.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6884 label pods pause testing-label-' May 13 21:59:04.109: INFO: stderr: "" May 13 21:59:04.109: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 13 21:59:04.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6884 get pod pause -L testing-label' May 13 21:59:04.288: INFO: stderr: "" May 13 21:59:04.288: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 13s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources May 13 21:59:04.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6884 delete --grace-period=0 --force -f -' May 13 21:59:04.437: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 21:59:04.437: INFO: stdout: "pod \"pause\" force deleted\n" May 13 21:59:04.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6884 get rc,svc -l name=pause --no-headers' May 13 21:59:04.641: INFO: stderr: "No resources found in kubectl-6884 namespace.\n" May 13 21:59:04.641: INFO: stdout: "" May 13 21:59:04.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6884 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 13 21:59:04.817: INFO: stderr: "" May 13 21:59:04.817: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:04.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6884" for this suite. 
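------------------------------
The two kubectl invocations above (add a label, then remove it with the trailing-dash form) have a direct API equivalent: a JSON merge patch, where setting a key to null deletes it. A sketch reusing the pod and namespace names from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	ns := "kubectl-6884" // any namespace with a "pause" pod works

	// Same effect as `kubectl label pods pause testing-label=testing-label-value`.
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(ctx, "pause", types.MergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// Same effect as `kubectl label pods pause testing-label-`: null deletes the key.
	remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(ctx, "pause", types.MergePatchType, remove, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("label added and removed on pod pause")
}
------------------------------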
• [SLOW TEST:13.700 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":6,"skipped":156,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:02.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC May 13 21:59:02.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8058 create -f -' May 13 21:59:03.222: INFO: stderr: "" May 13 21:59:03.222: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 13 21:59:04.225: INFO: Selector matched 1 pods for map[app:agnhost] May 13 21:59:04.225: INFO: Found 0 / 1 May 13 21:59:05.226: INFO: Selector matched 1 pods for map[app:agnhost] May 13 21:59:05.226: INFO: Found 0 / 1 May 13 21:59:06.227: INFO: Selector matched 1 pods for map[app:agnhost] May 13 21:59:06.227: INFO: Found 0 / 1 May 13 21:59:07.226: INFO: Selector matched 1 pods for map[app:agnhost] May 13 21:59:07.226: INFO: Found 1 / 1 May 13 21:59:07.226: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 13 21:59:07.228: INFO: Selector matched 1 pods for map[app:agnhost] May 13 21:59:07.228: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 13 21:59:07.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8058 patch pod agnhost-primary-5z7c6 -p {"metadata":{"annotations":{"x":"y"}}}' May 13 21:59:07.399: INFO: stderr: "" May 13 21:59:07.399: INFO: stdout: "pod/agnhost-primary-5z7c6 patched\n" STEP: checking annotations May 13 21:59:07.402: INFO: Selector matched 1 pods for map[app:agnhost] May 13 21:59:07.402: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:07.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8058" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":3,"skipped":52,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:03.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-a6d5a436-b51b-4597-9dfc-3d869b55e8f2 STEP: Creating a pod to test consume configMaps May 13 21:59:03.108: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba91e3bc-d03c-4a79-8c22-c04a7dced00d" in namespace "configmap-4337" to be "Succeeded or Failed" May 13 21:59:03.111: INFO: Pod "pod-configmaps-ba91e3bc-d03c-4a79-8c22-c04a7dced00d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145701ms May 13 21:59:05.113: INFO: Pod "pod-configmaps-ba91e3bc-d03c-4a79-8c22-c04a7dced00d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004722095s May 13 21:59:07.116: INFO: Pod "pod-configmaps-ba91e3bc-d03c-4a79-8c22-c04a7dced00d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007671556s May 13 21:59:09.120: INFO: Pod "pod-configmaps-ba91e3bc-d03c-4a79-8c22-c04a7dced00d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011021325s STEP: Saw pod success May 13 21:59:09.120: INFO: Pod "pod-configmaps-ba91e3bc-d03c-4a79-8c22-c04a7dced00d" satisfied condition "Succeeded or Failed" May 13 21:59:09.122: INFO: Trying to get logs from node node2 pod pod-configmaps-ba91e3bc-d03c-4a79-8c22-c04a7dced00d container agnhost-container: STEP: delete the pod May 13 21:59:09.138: INFO: Waiting for pod pod-configmaps-ba91e3bc-d03c-4a79-8c22-c04a7dced00d to disappear May 13 21:59:09.140: INFO: Pod pod-configmaps-ba91e3bc-d03c-4a79-8c22-c04a7dced00d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:09.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4337" for this suite. 
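The "mappings as non-root" ConfigMap test combines two pieces: an items: list that remaps a ConfigMap key onto a chosen file path inside the volume, and a pod-level securityContext that runs the container as a non-root UID. A rough, self-contained sketch (all names, the UID, and the key/path mapping here are illustrative, not the test fixture's actual values):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  securityContext:
    runAsUser: 1000            # non-root, the point of the [NodeConformance] variant
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: busybox
    command: ["cat", "/etc/cfg/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: demo-config
      items:                   # the "mapping": key data-1 lands at a nested path
      - key: data-1
        path: path/to/data-1
EOF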
• [SLOW TEST:6.074 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":180,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:13.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3669, will wait for the garbage collector to delete the pods May 13 21:58:29.498: INFO: Deleting Job.batch foo took: 3.710484ms May 13 21:58:29.599: INFO: Terminating Job.batch foo pods took: 100.351616ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:12.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3669" for this suite. • [SLOW TEST:59.002 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:07.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs May 13 21:59:07.455: INFO: Waiting up to 5m0s for pod "pod-10754ad4-b7d1-4cf9-b471-3f8c2456c9d2" in namespace "emptydir-2778" to be "Succeeded or Failed" May 13 21:59:07.457: INFO: Pod "pod-10754ad4-b7d1-4cf9-b471-3f8c2456c9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.814206ms May 13 21:59:09.462: INFO: Pod "pod-10754ad4-b7d1-4cf9-b471-3f8c2456c9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006510052s May 13 21:59:11.466: INFO: Pod "pod-10754ad4-b7d1-4cf9-b471-3f8c2456c9d2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.010994645s May 13 21:59:13.470: INFO: Pod "pod-10754ad4-b7d1-4cf9-b471-3f8c2456c9d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014669165s STEP: Saw pod success May 13 21:59:13.470: INFO: Pod "pod-10754ad4-b7d1-4cf9-b471-3f8c2456c9d2" satisfied condition "Succeeded or Failed" May 13 21:59:13.473: INFO: Trying to get logs from node node2 pod pod-10754ad4-b7d1-4cf9-b471-3f8c2456c9d2 container test-container: STEP: delete the pod May 13 21:59:13.484: INFO: Waiting for pod pod-10754ad4-b7d1-4cf9-b471-3f8c2456c9d2 to disappear May 13 21:59:13.486: INFO: Pod pod-10754ad4-b7d1-4cf9-b471-3f8c2456c9d2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:13.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2778" for this suite. • [SLOW TEST:6.074 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":55,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:13.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC May 13 21:59:13.532: INFO: namespace kubectl-6623 May 13 21:59:13.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6623 create -f -' May 13 21:59:13.883: INFO: stderr: "" May 13 21:59:13.883: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 13 21:59:14.889: INFO: Selector matched 1 pods for map[app:agnhost] May 13 21:59:14.889: INFO: Found 0 / 1 May 13 21:59:15.889: INFO: Selector matched 1 pods for map[app:agnhost] May 13 21:59:15.889: INFO: Found 0 / 1 May 13 21:59:16.888: INFO: Selector matched 1 pods for map[app:agnhost] May 13 21:59:16.888: INFO: Found 1 / 1 May 13 21:59:16.888: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 13 21:59:16.890: INFO: Selector matched 1 pods for map[app:agnhost] May 13 21:59:16.890: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 13 21:59:16.890: INFO: wait on agnhost-primary startup in kubectl-6623 May 13 21:59:16.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6623 logs agnhost-primary-9zdvg agnhost-primary' May 13 21:59:17.067: INFO: stderr: "" May 13 21:59:17.067: INFO: stdout: "Paused\n" STEP: exposing RC May 13 21:59:17.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6623 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' May 13 21:59:17.276: INFO: stderr: "" May 13 21:59:17.276: INFO: stdout: "service/rm2 exposed\n" May 13 21:59:17.279: INFO: Service rm2 in namespace kubectl-6623 found. STEP: exposing service May 13 21:59:19.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6623 expose service rm2 --name=rm3 --port=2345 --target-port=6379' May 13 21:59:19.493: INFO: stderr: "" May 13 21:59:19.493: INFO: stdout: "service/rm3 exposed\n" May 13 21:59:19.496: INFO: Service rm3 in namespace kubectl-6623 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:21.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6623" for this suite. • [SLOW TEST:8.003 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":5,"skipped":60,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:21.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-0a3016e4-defd-4e8e-9c9d-f6b20217371f [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:21.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1939" for this suite. 
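The expose test chains two invocations of the same verb: kubectl expose creates a Service whose selector is copied from the thing being exposed, and a Service can itself be exposed again under a new name and port. Reproduced by hand from the commands in the log:

kubectl expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
# both services front the same agnhost pods on container port 6379
kubectl get endpoints rm2 rm3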
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":6,"skipped":68,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:21.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server May 13 21:59:21.657: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-118 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:21.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-118" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":7,"skipped":82,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:12.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 13 21:59:12.477: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5302 3d1d8cae-43a8-4c56-b048-aecdaaa9287d 33106 0 2022-05-13 21:59:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-13 21:59:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 13 21:59:12.478: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5302 3d1d8cae-43a8-4c56-b048-aecdaaa9287d 33107 0 2022-05-13 21:59:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-13 21:59:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 13 21:59:12.478: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5302 
3d1d8cae-43a8-4c56-b048-aecdaaa9287d 33108 0 2022-05-13 21:59:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-13 21:59:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 13 21:59:22.501: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5302 3d1d8cae-43a8-4c56-b048-aecdaaa9287d 33364 0 2022-05-13 21:59:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-13 21:59:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 13 21:59:22.501: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5302 3d1d8cae-43a8-4c56-b048-aecdaaa9287d 33365 0 2022-05-13 21:59:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-13 21:59:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 13 21:59:22.502: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5302 3d1d8cae-43a8-4c56-b048-aecdaaa9287d 33366 0 2022-05-13 21:59:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-13 21:59:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:22.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5302" for this suite. 
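The watch test above is entirely about label-selector-filtered watches: when the ConfigMap's label is changed so it stops matching the selector, the watch reports DELETED, and restoring the label reports ADDED, even though the object existed throughout. The same behaviour is visible from the CLI (names mirror this run; --output-watch-events prints the event type alongside each row):

# terminal 1: watch only configmaps matching the selector
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored \
    --watch --output-watch-events
# terminal 2: flip the label away and back
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=off --overwrite
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite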
• [SLOW TEST:10.069 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:09.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready May 13 21:59:09.186: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 13 21:59:09.186: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 13 21:59:09.189: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 13 21:59:09.189: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 13 21:59:09.195: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 13 21:59:09.195: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 13 21:59:09.211: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 13 21:59:09.211: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 13 21:59:13.724: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 and labels map[test-deployment-static:true] May 13 21:59:13.724: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 and labels map[test-deployment-static:true] May 13 21:59:13.748: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment May 13 21:59:13.754: INFO: observed event type ADDED STEP: waiting for Replicas to scale May 13 21:59:13.755: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 May 13 21:59:13.755: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 
May 13 21:59:13.755: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 May 13 21:59:13.755: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 May 13 21:59:13.755: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 May 13 21:59:13.755: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 May 13 21:59:13.755: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 May 13 21:59:13.755: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 0 May 13 21:59:13.756: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 May 13 21:59:13.756: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 May 13 21:59:13.756: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:13.756: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:13.756: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:13.756: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:13.758: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:13.758: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:13.764: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:13.764: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:13.770: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 May 13 21:59:13.770: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 May 13 21:59:13.777: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 May 13 21:59:13.777: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 May 13 21:59:17.132: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:17.132: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:17.144: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 STEP: listing Deployments May 13 21:59:17.147: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment May 13 21:59:17.158: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 STEP: fetching the DeploymentStatus May 13 21:59:17.166: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 13 21:59:17.166: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 13 21:59:17.169: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 13 21:59:17.176: INFO: observed Deployment test-deployment in namespace 
deployment-6047 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 13 21:59:17.182: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 13 21:59:21.152: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] May 13 21:59:21.162: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] May 13 21:59:21.173: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] May 13 21:59:21.181: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] May 13 21:59:24.343: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus May 13 21:59:24.366: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 May 13 21:59:24.366: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 May 13 21:59:24.366: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 May 13 21:59:24.366: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 May 13 21:59:24.366: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 1 May 13 21:59:24.366: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:24.366: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 3 May 13 21:59:24.366: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:24.366: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 2 May 13 21:59:24.366: INFO: observed Deployment test-deployment in namespace deployment-6047 with ReadyReplicas 3 STEP: deleting the Deployment May 13 21:59:24.372: INFO: observed event type MODIFIED May 13 21:59:24.372: INFO: observed event type MODIFIED May 13 21:59:24.373: INFO: observed event type MODIFIED May 13 21:59:24.373: INFO: observed event type MODIFIED May 13 21:59:24.373: INFO: observed event type MODIFIED May 13 21:59:24.373: INFO: observed event type MODIFIED May 13 21:59:24.373: INFO: observed event type MODIFIED May 13 21:59:24.373: INFO: observed event type MODIFIED May 13 21:59:24.373: INFO: observed event type MODIFIED May 13 21:59:24.373: INFO: observed event type MODIFIED May 13 21:59:24.373: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 13 21:59:24.376: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:24.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6047" for this 
suite. • [SLOW TEST:15.230 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":9,"skipped":182,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:21.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 21:59:21.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-962e1363-43f2-4f29-9df7-d4e6c753117d" in namespace "downward-api-9584" to be "Succeeded or Failed" May 13 21:59:21.831: INFO: Pod "downwardapi-volume-962e1363-43f2-4f29-9df7-d4e6c753117d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332023ms May 13 21:59:23.834: INFO: Pod "downwardapi-volume-962e1363-43f2-4f29-9df7-d4e6c753117d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005071382s May 13 21:59:25.839: INFO: Pod "downwardapi-volume-962e1363-43f2-4f29-9df7-d4e6c753117d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009921735s STEP: Saw pod success May 13 21:59:25.839: INFO: Pod "downwardapi-volume-962e1363-43f2-4f29-9df7-d4e6c753117d" satisfied condition "Succeeded or Failed" May 13 21:59:25.841: INFO: Trying to get logs from node node1 pod downwardapi-volume-962e1363-43f2-4f29-9df7-d4e6c753117d container client-container: STEP: delete the pod May 13 21:59:25.853: INFO: Waiting for pod downwardapi-volume-962e1363-43f2-4f29-9df7-d4e6c753117d to disappear May 13 21:59:25.856: INFO: Pod downwardapi-volume-962e1363-43f2-4f29-9df7-d4e6c753117d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:25.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9584" for this suite. 
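The downward API test just above exercises a subtle defaulting rule: a downwardAPI volume item may reference limits.cpu via resourceFieldRef, and when the container declares no CPU limit, the kubelet substitutes the node's allocatable CPU. A sketch of the shape involved (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu   # no limit is declared, so this resolves to node-allocatable CPU
EOF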
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:24.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath May 13 21:59:24.469: INFO: Waiting up to 5m0s for pod "var-expansion-e45ea944-94ca-4eda-94f3-a8d9980b311b" in namespace "var-expansion-6700" to be "Succeeded or Failed" May 13 21:59:24.471: INFO: Pod "var-expansion-e45ea944-94ca-4eda-94f3-a8d9980b311b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.735421ms May 13 21:59:26.474: INFO: Pod "var-expansion-e45ea944-94ca-4eda-94f3-a8d9980b311b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005389827s May 13 21:59:28.478: INFO: Pod "var-expansion-e45ea944-94ca-4eda-94f3-a8d9980b311b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009115909s STEP: Saw pod success May 13 21:59:28.478: INFO: Pod "var-expansion-e45ea944-94ca-4eda-94f3-a8d9980b311b" satisfied condition "Succeeded or Failed" May 13 21:59:28.480: INFO: Trying to get logs from node node1 pod var-expansion-e45ea944-94ca-4eda-94f3-a8d9980b311b container dapi-container: STEP: delete the pod May 13 21:59:28.491: INFO: Waiting for pod var-expansion-e45ea944-94ca-4eda-94f3-a8d9980b311b to disappear May 13 21:59:28.493: INFO: Pod var-expansion-e45ea944-94ca-4eda-94f3-a8d9980b311b no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:28.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6700" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":10,"skipped":203,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:28.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs May 13 21:59:28.564: INFO: Waiting up to 5m0s for pod "pod-56763306-9c6f-420f-9ef1-34c169c41541" in namespace "emptydir-460" to be "Succeeded or Failed" May 13 21:59:28.567: INFO: Pod "pod-56763306-9c6f-420f-9ef1-34c169c41541": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.210775ms May 13 21:59:30.571: INFO: Pod "pod-56763306-9c6f-420f-9ef1-34c169c41541": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007042251s May 13 21:59:32.575: INFO: Pod "pod-56763306-9c6f-420f-9ef1-34c169c41541": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010164168s STEP: Saw pod success May 13 21:59:32.575: INFO: Pod "pod-56763306-9c6f-420f-9ef1-34c169c41541" satisfied condition "Succeeded or Failed" May 13 21:59:32.577: INFO: Trying to get logs from node node1 pod pod-56763306-9c6f-420f-9ef1-34c169c41541 container test-container: STEP: delete the pod May 13 21:59:32.591: INFO: Waiting for pod pod-56763306-9c6f-420f-9ef1-34c169c41541 to disappear May 13 21:59:32.593: INFO: Pod pod-56763306-9c6f-420f-9ef1-34c169c41541 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:32.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-460" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":214,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:25.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 21:59:26.125: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 21:59:28.133: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075966, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075966, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075966, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075966, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 21:59:30.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075966, 
loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075966, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075966, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075966, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 21:59:33.143: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:33.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9950" for this suite. STEP: Destroying namespace "webhook-9950-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.287 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":9,"skipped":115,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:22.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-536 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-536 STEP: Deleting pre-stop pod May 13 21:59:35.650: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:35.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-536" for this suite. • [SLOW TEST:13.093 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":4,"skipped":49,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:33.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 21:59:33.288: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6520bafe-8e12-4693-8121-acdf11c9e4f7" in namespace "downward-api-9490" to be "Succeeded or Failed" May 13 21:59:33.291: INFO: Pod "downwardapi-volume-6520bafe-8e12-4693-8121-acdf11c9e4f7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.646035ms May 13 21:59:35.295: INFO: Pod "downwardapi-volume-6520bafe-8e12-4693-8121-acdf11c9e4f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006826864s May 13 21:59:37.298: INFO: Pod "downwardapi-volume-6520bafe-8e12-4693-8121-acdf11c9e4f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010685311s STEP: Saw pod success May 13 21:59:37.298: INFO: Pod "downwardapi-volume-6520bafe-8e12-4693-8121-acdf11c9e4f7" satisfied condition "Succeeded or Failed" May 13 21:59:37.301: INFO: Trying to get logs from node node2 pod downwardapi-volume-6520bafe-8e12-4693-8121-acdf11c9e4f7 container client-container: STEP: delete the pod May 13 21:59:37.316: INFO: Waiting for pod downwardapi-volume-6520bafe-8e12-4693-8121-acdf11c9e4f7 to disappear May 13 21:59:37.320: INFO: Pod downwardapi-volume-6520bafe-8e12-4693-8121-acdf11c9e4f7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:37.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9490" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:03.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 21:59:03.813: INFO: created pod May 13 21:59:03.813: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-3640" to be "Succeeded or Failed" May 13 21:59:03.819: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.914975ms May 13 21:59:05.824: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010511594s May 13 21:59:07.828: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014576375s STEP: Saw pod success May 13 21:59:07.828: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" May 13 21:59:37.829: INFO: polling logs May 13 21:59:37.836: INFO: Pod logs: 2022/05/13 21:59:07 OK: Got token 2022/05/13 21:59:07 validating with in-cluster discovery 2022/05/13 21:59:07 OK: got issuer https://kubernetes.default.svc.cluster.local 2022/05/13 21:59:07 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3640:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1652479744, NotBefore:1652479144, IssuedAt:1652479144, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3640", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"e4d7641b-a44b-41ec-98b8-cb03af634126"}}} 2022/05/13 21:59:07 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2022/05/13 21:59:07 OK: Validated signature on JWT 2022/05/13 21:59:07 OK: Got valid claims from token! 2022/05/13 21:59:07 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3640:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1652479744, NotBefore:1652479144, IssuedAt:1652479144, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3640", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"e4d7641b-a44b-41ec-98b8-cb03af634126"}}} May 13 21:59:37.836: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:37.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3640" for this suite. 
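The ServiceAccountIssuerDiscovery test validates a projected token against the cluster's own OIDC discovery documents. Both documents are served by the kube-apiserver as plain endpoints and can be fetched directly; the issuer in the log, https://kubernetes.default.svc.cluster.local, is this cluster's configured value:

kubectl get --raw /.well-known/openid-configuration   # issuer, jwks_uri, supported algorithms
kubectl get --raw /openid/v1/jwks                     # key set used to verify token signatures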
• [SLOW TEST:34.076 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":4,"skipped":74,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:37.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image May 13 21:59:37.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3176 create -f -' May 13 21:59:38.285: INFO: stderr: "" May 13 21:59:38.285: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image May 13 21:59:38.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3176 diff -f -' May 13 21:59:38.599: INFO: rc: 1 May 13 21:59:38.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3176 delete -f -' May 13 21:59:38.730: INFO: stderr: "" May 13 21:59:38.730: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:38.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3176" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":5,"skipped":95,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:32.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:39.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3490" for this suite. • [SLOW TEST:7.042 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":12,"skipped":226,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:39.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:39.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-2332" for this suite. 
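The `rc: 1` in the kubectl diff run a few records up is the expected outcome, not an error: kubectl diff exits 0 when live and declared state match, 1 when a difference is found, and greater than 1 on failure. A sketch of using that contract in a script, assuming a manifest file deploy.yaml:

if kubectl diff -f deploy.yaml; then
  echo "live state matches the manifest"
else
  rc=$?
  if [ "$rc" -eq 1 ]; then echo "drift detected"; else echo "kubectl diff failed (rc=$rc)"; fi
fi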
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":13,"skipped":228,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:09.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected W0513 21:58:09.264039 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 21:58:09.264: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 21:58:09.267: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-8c6502d8-fcd2-446e-9a6e-f73cddb18de3 STEP: Creating configMap with name cm-test-opt-upd-d1d56fc9-31f0-489d-b03b-57f29554febf STEP: Creating the pod May 13 21:58:09.297: INFO: The status of Pod pod-projected-configmaps-ddc63637-5afc-4e31-b650-32e3397503f3 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:11.301: INFO: The status of Pod pod-projected-configmaps-ddc63637-5afc-4e31-b650-32e3397503f3 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:13.302: INFO: The status of Pod pod-projected-configmaps-ddc63637-5afc-4e31-b650-32e3397503f3 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:15.302: INFO: The status of Pod pod-projected-configmaps-ddc63637-5afc-4e31-b650-32e3397503f3 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:17.302: INFO: The status of Pod pod-projected-configmaps-ddc63637-5afc-4e31-b650-32e3397503f3 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:19.301: INFO: The status of Pod pod-projected-configmaps-ddc63637-5afc-4e31-b650-32e3397503f3 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:21.322: INFO: The status of Pod pod-projected-configmaps-ddc63637-5afc-4e31-b650-32e3397503f3 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:23.301: INFO: The status of Pod pod-projected-configmaps-ddc63637-5afc-4e31-b650-32e3397503f3 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:25.302: INFO: The status of Pod pod-projected-configmaps-ddc63637-5afc-4e31-b650-32e3397503f3 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:27.303: INFO: The status of Pod pod-projected-configmaps-ddc63637-5afc-4e31-b650-32e3397503f3 is Pending, waiting for it to be Running (with Ready = true) May 13 21:58:29.301: INFO: The status of Pod pod-projected-configmaps-ddc63637-5afc-4e31-b650-32e3397503f3 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-8c6502d8-fcd2-446e-9a6e-f73cddb18de3 STEP: Updating configmap cm-test-opt-upd-d1d56fc9-31f0-489d-b03b-57f29554febf STEP: Creating configMap with name cm-test-opt-create-b0f929ba-c724-4ae1-abe1-a47b65f48ab8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected 
configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:41.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7259" for this suite. • [SLOW TEST:91.785 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:35.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-ce569618-a17c-4397-9113-97f799b00c2d STEP: Creating secret with name s-test-opt-upd-b2ccb19e-bc97-4da2-9f31-85548ee5ebea STEP: Creating the pod May 13 21:59:35.722: INFO: The status of Pod pod-projected-secrets-c364f445-6077-468b-bc42-1352ab10ab55 is Pending, waiting for it to be Running (with Ready = true) May 13 21:59:37.725: INFO: The status of Pod pod-projected-secrets-c364f445-6077-468b-bc42-1352ab10ab55 is Pending, waiting for it to be Running (with Ready = true) May 13 21:59:39.726: INFO: The status of Pod pod-projected-secrets-c364f445-6077-468b-bc42-1352ab10ab55 is Pending, waiting for it to be Running (with Ready = true) May 13 21:59:41.728: INFO: The status of Pod pod-projected-secrets-c364f445-6077-468b-bc42-1352ab10ab55 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-ce569618-a17c-4397-9113-97f799b00c2d STEP: Updating secret s-test-opt-upd-b2ccb19e-bc97-4da2-9f31-85548ee5ebea STEP: Creating secret with name s-test-opt-create-7f983245-6dfa-48ee-b1d0-14be0d33317b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:45.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6967" for this suite. 
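The two "optional updates" specs above hinge on projected volume sources being marked optional: the pod is admitted even while a referenced ConfigMap or Secret is absent, and the kubelet folds created or updated keys into the mounted files on its next sync, which is what the "waiting to observe update in volume" step polls for. A minimal sketch of such a pod using the k8s.io/api types (the names, image, and polling command are illustrative, not the framework's actual fixtures):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    optional := true
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-optional"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "watcher",
                Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29-1", // illustrative image
                Command:      []string{"sh", "-c", "while true; do cat /etc/projected/* 2>/dev/null; sleep 1; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "projected", MountPath: "/etc/projected"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{
                            // Optional sources let the pod start (and keep
                            // running) while the referenced object is missing;
                            // keys appear in the volume once it is created.
                            {ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
                                Optional:             &optional,
                            }},
                            {Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
                                Optional:             &optional,
                            }},
                        },
                    },
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}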
• [SLOW TEST:10.122 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":50,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:41.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created May 13 21:59:41.061: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) May 13 21:59:43.064: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) May 13 21:59:45.064: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:46.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7670" for this suite. 
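The adoption spec below passes because the ReplicationController manager, on finding a pod that matches its selector but carries no controller ownerReference, claims the orphan instead of creating a fresh replica. A sketch of the controller object involved (the label and image mirror the log; the rest is illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"name": "pod-adoption"}
    replicas := int32(1)
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: labels, // matches the pre-existing orphan pod
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{Containers: []corev1.Container{{
                    Name:  "pod-adoption",
                    Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
                }}},
            },
        },
    }
    // Once created, the RC adopts the orphan rather than starting a second
    // replica: the pod gains an ownerReference with controller=true.
    b, _ := json.MarshalIndent(rc, "", "  ")
    fmt.Println(string(b))
}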
• [SLOW TEST:5.058 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:45.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 21:59:46.259: INFO: Checking APIGroup: apiregistration.k8s.io May 13 21:59:46.260: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 May 13 21:59:46.260: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.260: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 May 13 21:59:46.260: INFO: Checking APIGroup: apps May 13 21:59:46.261: INFO: PreferredVersion.GroupVersion: apps/v1 May 13 21:59:46.261: INFO: Versions found [{apps/v1 v1}] May 13 21:59:46.261: INFO: apps/v1 matches apps/v1 May 13 21:59:46.261: INFO: Checking APIGroup: events.k8s.io May 13 21:59:46.262: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 May 13 21:59:46.262: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.262: INFO: events.k8s.io/v1 matches events.k8s.io/v1 May 13 21:59:46.262: INFO: Checking APIGroup: authentication.k8s.io May 13 21:59:46.263: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 May 13 21:59:46.263: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.263: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 May 13 21:59:46.263: INFO: Checking APIGroup: authorization.k8s.io May 13 21:59:46.264: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 May 13 21:59:46.264: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.264: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 May 13 21:59:46.264: INFO: Checking APIGroup: autoscaling May 13 21:59:46.264: INFO: PreferredVersion.GroupVersion: autoscaling/v1 May 13 21:59:46.264: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] May 13 21:59:46.264: INFO: autoscaling/v1 matches autoscaling/v1 May 13 21:59:46.264: INFO: Checking APIGroup: batch May 13 21:59:46.265: INFO: PreferredVersion.GroupVersion: batch/v1 May 13 21:59:46.265: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] May 13 21:59:46.265: INFO: batch/v1 matches batch/v1 May 13 21:59:46.265: INFO: Checking APIGroup: certificates.k8s.io May 13 21:59:46.266: INFO: 
PreferredVersion.GroupVersion: certificates.k8s.io/v1 May 13 21:59:46.266: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.266: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 May 13 21:59:46.266: INFO: Checking APIGroup: networking.k8s.io May 13 21:59:46.267: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 May 13 21:59:46.267: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.267: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 May 13 21:59:46.267: INFO: Checking APIGroup: extensions May 13 21:59:46.268: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 May 13 21:59:46.268: INFO: Versions found [{extensions/v1beta1 v1beta1}] May 13 21:59:46.268: INFO: extensions/v1beta1 matches extensions/v1beta1 May 13 21:59:46.268: INFO: Checking APIGroup: policy May 13 21:59:46.269: INFO: PreferredVersion.GroupVersion: policy/v1 May 13 21:59:46.269: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] May 13 21:59:46.269: INFO: policy/v1 matches policy/v1 May 13 21:59:46.269: INFO: Checking APIGroup: rbac.authorization.k8s.io May 13 21:59:46.270: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 May 13 21:59:46.270: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.270: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 May 13 21:59:46.270: INFO: Checking APIGroup: storage.k8s.io May 13 21:59:46.270: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 May 13 21:59:46.270: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.270: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 May 13 21:59:46.270: INFO: Checking APIGroup: admissionregistration.k8s.io May 13 21:59:46.271: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 May 13 21:59:46.271: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.271: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 May 13 21:59:46.271: INFO: Checking APIGroup: apiextensions.k8s.io May 13 21:59:46.272: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 May 13 21:59:46.272: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.272: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 May 13 21:59:46.272: INFO: Checking APIGroup: scheduling.k8s.io May 13 21:59:46.273: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 May 13 21:59:46.273: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.273: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 May 13 21:59:46.273: INFO: Checking APIGroup: coordination.k8s.io May 13 21:59:46.273: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 May 13 21:59:46.273: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.273: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 May 13 21:59:46.273: INFO: Checking APIGroup: node.k8s.io May 13 21:59:46.274: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 May 13 21:59:46.274: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.274: INFO: node.k8s.io/v1 matches node.k8s.io/v1 May 13 21:59:46.274: INFO: Checking APIGroup: discovery.k8s.io May 13 
21:59:46.275: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 May 13 21:59:46.275: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.275: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 May 13 21:59:46.275: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io May 13 21:59:46.275: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 May 13 21:59:46.275: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.276: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 May 13 21:59:46.276: INFO: Checking APIGroup: intel.com May 13 21:59:46.278: INFO: PreferredVersion.GroupVersion: intel.com/v1 May 13 21:59:46.278: INFO: Versions found [{intel.com/v1 v1}] May 13 21:59:46.278: INFO: intel.com/v1 matches intel.com/v1 May 13 21:59:46.278: INFO: Checking APIGroup: k8s.cni.cncf.io May 13 21:59:46.279: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 May 13 21:59:46.279: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] May 13 21:59:46.279: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 May 13 21:59:46.279: INFO: Checking APIGroup: monitoring.coreos.com May 13 21:59:46.280: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 May 13 21:59:46.280: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] May 13 21:59:46.280: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 May 13 21:59:46.280: INFO: Checking APIGroup: telemetry.intel.com May 13 21:59:46.281: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 May 13 21:59:46.281: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] May 13 21:59:46.281: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 May 13 21:59:46.281: INFO: Checking APIGroup: custom.metrics.k8s.io May 13 21:59:46.282: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 May 13 21:59:46.282: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] May 13 21:59:46.282: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:46.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-8563" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":6,"skipped":80,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:37.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
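The Discovery spec above walks every group in the /apis discovery document and asserts that the advertised preferredVersion is among the group's served versions. The same check can be written directly against client-go's discovery client; a sketch (the kubeconfig path follows the log):

package main

import (
    "fmt"

    "k8s.io/client-go/discovery"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    if err != nil {
        panic(err)
    }
    groups, err := dc.ServerGroups()
    if err != nil {
        panic(err)
    }
    for _, g := range groups.Groups {
        // The preferred version must be one of the served versions.
        found := false
        for _, v := range g.Versions {
            if v.GroupVersion == g.PreferredVersion.GroupVersion {
                found = true
            }
        }
        fmt.Printf("%s preferred=%s ok=%v\n", g.Name, g.PreferredVersion.GroupVersion, found)
    }
}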
May 13 21:59:37.444: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 13 21:59:39.447: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 13 21:59:41.450: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 13 21:59:41.465: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) May 13 21:59:43.468: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) May 13 21:59:45.470: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook May 13 21:59:45.476: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 13 21:59:45.478: INFO: Pod pod-with-prestop-http-hook still exists May 13 21:59:47.478: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 13 21:59:47.482: INFO: Pod pod-with-prestop-http-hook still exists May 13 21:59:49.479: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 13 21:59:49.481: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:49.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2680" for this suite. 
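The preStop HTTPGet hook is fired by the kubelet while the pod's containers are being torn down, which is why the spec above deletes pod-with-prestop-http-hook and only then checks the handler pod for the recorded request. A sketch of the hook wiring (host, port, path, and image are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pod-with-prestop-http-hook",
                Image: "k8s.gcr.io/pause:3.4.1", // illustrative
                Lifecycle: &corev1.Lifecycle{
                    // In v1.21 the handler type is corev1.Handler (renamed
                    // LifecycleHandler in later releases).
                    PreStop: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/echo?msg=prestop",
                            Host: "10.244.3.5", // the handler pod's IP; illustrative
                            Port: intstr.FromInt(8080),
                        },
                    },
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}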
• [SLOW TEST:12.091 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:46.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-8dec4dff-601e-4e3c-9b0b-04c5712f4846 STEP: Creating a pod to test consume secrets May 13 21:59:46.378: INFO: Waiting up to 5m0s for pod "pod-secrets-91c237e5-d1d1-4df9-91e6-ff93850526a7" in namespace "secrets-8715" to be "Succeeded or Failed" May 13 21:59:46.380: INFO: Pod "pod-secrets-91c237e5-d1d1-4df9-91e6-ff93850526a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023809ms May 13 21:59:48.384: INFO: Pod "pod-secrets-91c237e5-d1d1-4df9-91e6-ff93850526a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005697053s May 13 21:59:50.388: INFO: Pod "pod-secrets-91c237e5-d1d1-4df9-91e6-ff93850526a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009833133s STEP: Saw pod success May 13 21:59:50.388: INFO: Pod "pod-secrets-91c237e5-d1d1-4df9-91e6-ff93850526a7" satisfied condition "Succeeded or Failed" May 13 21:59:50.390: INFO: Trying to get logs from node node1 pod pod-secrets-91c237e5-d1d1-4df9-91e6-ff93850526a7 container secret-volume-test: STEP: delete the pod May 13 21:59:50.416: INFO: Waiting for pod pod-secrets-91c237e5-d1d1-4df9-91e6-ff93850526a7 to disappear May 13 21:59:50.418: INFO: Pod pod-secrets-91c237e5-d1d1-4df9-91e6-ff93850526a7 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:50.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8715" for this suite. 
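The Secrets volume specs follow one pattern: each key of the secret surfaces as a file under the mount path, the test container cats the file and exits, and the framework waits for the pod to reach "Succeeded or Failed". A sketch of the pair of objects (names, image, and key are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
        Data:       map[string][]byte{"data-1": []byte("value-1")},
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever, // run once, then Succeeded/Failed
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29-1", // illustrative
                Command:      []string{"cat", "/etc/secret-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
                },
            }},
        },
    }
    for _, obj := range []interface{}{secret, pod} {
        b, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(b))
    }
}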
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":106,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:38.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 21:59:38.781: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 13 21:59:47.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8466 --namespace=crd-publish-openapi-8466 create -f -' May 13 21:59:47.929: INFO: stderr: "" May 13 21:59:47.929: INFO: stdout: "e2e-test-crd-publish-openapi-3825-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 13 21:59:47.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8466 --namespace=crd-publish-openapi-8466 delete e2e-test-crd-publish-openapi-3825-crds test-cr' May 13 21:59:48.109: INFO: stderr: "" May 13 21:59:48.109: INFO: stdout: "e2e-test-crd-publish-openapi-3825-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 13 21:59:48.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8466 --namespace=crd-publish-openapi-8466 apply -f -' May 13 21:59:48.464: INFO: stderr: "" May 13 21:59:48.464: INFO: stdout: "e2e-test-crd-publish-openapi-3825-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 13 21:59:48.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8466 --namespace=crd-publish-openapi-8466 delete e2e-test-crd-publish-openapi-3825-crds test-cr' May 13 21:59:48.621: INFO: stderr: "" May 13 21:59:48.621: INFO: stdout: "e2e-test-crd-publish-openapi-3825-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 13 21:59:48.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8466 explain e2e-test-crd-publish-openapi-3825-crds' May 13 21:59:48.961: INFO: stderr: "" May 13 21:59:48.961: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3825-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. 
Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:52.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8466" for this suite. • [SLOW TEST:13.836 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":6,"skipped":103,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:46.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 21:59:57.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6413" for this suite. • [SLOW TEST:11.102 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":-1,"completed":3,"skipped":11,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:36.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0513 21:58:36.186969 39 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:00.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-890" for this suite. • [SLOW TEST:84.053 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:50.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9178.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9178.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9178.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9178.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9178.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9178.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 22:00:00.524: INFO: DNS probes using dns-9178/dns-test-e3810047-cae0-4160-8289-cd7ba016769d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:00.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9178" for this suite. • [SLOW TEST:10.094 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":117,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:57.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium May 13 21:59:57.242: INFO: Waiting up to 5m0s for pod "pod-5a9e5202-d77f-4684-a22d-16f7fb9f52c2" in namespace "emptydir-6955" to be "Succeeded or Failed" May 13 21:59:57.244: INFO: Pod "pod-5a9e5202-d77f-4684-a22d-16f7fb9f52c2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.824113ms May 13 21:59:59.248: INFO: Pod "pod-5a9e5202-d77f-4684-a22d-16f7fb9f52c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005817277s May 13 22:00:01.253: INFO: Pod "pod-5a9e5202-d77f-4684-a22d-16f7fb9f52c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011028965s STEP: Saw pod success May 13 22:00:01.253: INFO: Pod "pod-5a9e5202-d77f-4684-a22d-16f7fb9f52c2" satisfied condition "Succeeded or Failed" May 13 22:00:01.256: INFO: Trying to get logs from node node2 pod pod-5a9e5202-d77f-4684-a22d-16f7fb9f52c2 container test-container: STEP: delete the pod May 13 22:00:01.267: INFO: Waiting for pod pod-5a9e5202-d77f-4684-a22d-16f7fb9f52c2 to disappear May 13 22:00:01.270: INFO: Pod pod-5a9e5202-d77f-4684-a22d-16f7fb9f52c2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:01.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6955" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:49.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller May 13 21:59:49.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 create -f -' May 13 21:59:49.943: INFO: stderr: "" May 13 21:59:49.943: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 13 21:59:49.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 13 21:59:50.120: INFO: stderr: "" May 13 21:59:50.120: INFO: stdout: "update-demo-nautilus-hqxx4 update-demo-nautilus-pv28l " May 13 21:59:50.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 get pods update-demo-nautilus-hqxx4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' May 13 21:59:50.284: INFO: stderr: "" May 13 21:59:50.284: INFO: stdout: "" May 13 21:59:50.284: INFO: update-demo-nautilus-hqxx4 is created but not running May 13 21:59:55.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 13 21:59:55.455: INFO: stderr: "" May 13 21:59:55.455: INFO: stdout: "update-demo-nautilus-hqxx4 update-demo-nautilus-pv28l " May 13 21:59:55.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 get pods update-demo-nautilus-hqxx4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 13 21:59:55.627: INFO: stderr: "" May 13 21:59:55.627: INFO: stdout: "" May 13 21:59:55.627: INFO: update-demo-nautilus-hqxx4 is created but not running May 13 22:00:00.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 13 22:00:00.788: INFO: stderr: "" May 13 22:00:00.788: INFO: stdout: "update-demo-nautilus-hqxx4 update-demo-nautilus-pv28l " May 13 22:00:00.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 get pods update-demo-nautilus-hqxx4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 13 22:00:00.950: INFO: stderr: "" May 13 22:00:00.950: INFO: stdout: "true" May 13 22:00:00.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 get pods update-demo-nautilus-hqxx4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 13 22:00:01.101: INFO: stderr: "" May 13 22:00:01.101: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 13 22:00:01.101: INFO: validating pod update-demo-nautilus-hqxx4 May 13 22:00:01.105: INFO: got data: { "image": "nautilus.jpg" } May 13 22:00:01.105: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 22:00:01.105: INFO: update-demo-nautilus-hqxx4 is verified up and running May 13 22:00:01.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 get pods update-demo-nautilus-pv28l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 13 22:00:01.294: INFO: stderr: "" May 13 22:00:01.294: INFO: stdout: "true" May 13 22:00:01.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 get pods update-demo-nautilus-pv28l -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 13 22:00:01.443: INFO: stderr: "" May 13 22:00:01.443: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 13 22:00:01.443: INFO: validating pod update-demo-nautilus-pv28l May 13 22:00:01.446: INFO: got data: { "image": "nautilus.jpg" } May 13 22:00:01.446: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 22:00:01.446: INFO: update-demo-nautilus-pv28l is verified up and running STEP: using delete to clean up resources May 13 22:00:01.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 delete --grace-period=0 --force -f -' May 13 22:00:01.578: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 22:00:01.578: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 13 22:00:01.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 get rc,svc -l name=update-demo --no-headers' May 13 22:00:01.792: INFO: stderr: "No resources found in kubectl-6405 namespace.\n" May 13 22:00:01.792: INFO: stdout: "" May 13 22:00:01.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6405 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 13 22:00:01.956: INFO: stderr: "" May 13 22:00:01.956: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:01.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6405" for this suite. 
• [SLOW TEST:12.392 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":12,"skipped":201,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:01.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 13 22:00:01.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4774 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' May 13 22:00:01.505: INFO: stderr: "" May 13 22:00:01.505: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 May 13 22:00:01.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4774 delete pods e2e-test-httpd-pod' May 13 22:00:05.509: INFO: stderr: "" May 13 22:00:05.509: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:05.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4774" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:02.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-6e82cece-a9f1-4db6-bf44-15729d18e373 STEP: Creating a pod to test consume configMaps May 13 22:00:02.057: INFO: Waiting up to 5m0s for pod "pod-configmaps-0ab5651c-3bd9-4bfa-b8bc-e506fcf2fb1a" in namespace "configmap-8465" to be "Succeeded or Failed" May 13 22:00:02.063: INFO: Pod "pod-configmaps-0ab5651c-3bd9-4bfa-b8bc-e506fcf2fb1a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.731556ms May 13 22:00:04.068: INFO: Pod "pod-configmaps-0ab5651c-3bd9-4bfa-b8bc-e506fcf2fb1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011159971s May 13 22:00:06.073: INFO: Pod "pod-configmaps-0ab5651c-3bd9-4bfa-b8bc-e506fcf2fb1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015496228s STEP: Saw pod success May 13 22:00:06.073: INFO: Pod "pod-configmaps-0ab5651c-3bd9-4bfa-b8bc-e506fcf2fb1a" satisfied condition "Succeeded or Failed" May 13 22:00:06.075: INFO: Trying to get logs from node node2 pod pod-configmaps-0ab5651c-3bd9-4bfa-b8bc-e506fcf2fb1a container configmap-volume-test: STEP: delete the pod May 13 22:00:06.085: INFO: Waiting for pod pod-configmaps-0ab5651c-3bd9-4bfa-b8bc-e506fcf2fb1a to disappear May 13 22:00:06.088: INFO: Pod pod-configmaps-0ab5651c-3bd9-4bfa-b8bc-e506fcf2fb1a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:06.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8465" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":224,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:00.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container May 13 22:00:06.283: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2542 PodName:pod-sharedvolume-722f9e61-035f-458b-acb8-c9ca9fe963fe ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:00:06.283: INFO: >>> kubeConfig: /root/.kube/config May 13 22:00:06.406: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:06.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2542" for this suite. • [SLOW TEST:6.170 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:05.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 13 22:00:09.605: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:09.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6620" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:09.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:09.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-8212" for this suite. • ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:06.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint May 13 22:00:06.162: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint May 13 22:00:08.173: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint May 13 22:00:10.185: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:12.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-5856" for this suite. 
• [SLOW TEST:6.069 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":14,"skipped":237,"failed":0} SSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":7,"skipped":52,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:09.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs May 13 22:00:09.723: INFO: Waiting up to 5m0s for pod "pod-a9954430-f166-4ba3-82ed-edae08529041" in namespace "emptydir-3940" to be "Succeeded or Failed" May 13 22:00:09.725: INFO: Pod "pod-a9954430-f166-4ba3-82ed-edae08529041": Phase="Pending", Reason="", readiness=false. Elapsed: 1.915352ms May 13 22:00:11.728: INFO: Pod "pod-a9954430-f166-4ba3-82ed-edae08529041": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004669318s May 13 22:00:13.731: INFO: Pod "pod-a9954430-f166-4ba3-82ed-edae08529041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008160766s STEP: Saw pod success May 13 22:00:13.731: INFO: Pod "pod-a9954430-f166-4ba3-82ed-edae08529041" satisfied condition "Succeeded or Failed" May 13 22:00:13.734: INFO: Trying to get logs from node node1 pod pod-a9954430-f166-4ba3-82ed-edae08529041 container test-container: STEP: delete the pod May 13 22:00:13.749: INFO: Waiting for pod pod-a9954430-f166-4ba3-82ed-edae08529041 to disappear May 13 22:00:13.750: INFO: Pod pod-a9954430-f166-4ba3-82ed-edae08529041 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:13.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3940" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":52,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:13.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-fdeeefd1-77f9-47b7-b2c4-021efd8ec8b7 STEP: Creating the pod May 13 22:00:13.841: INFO: The status of Pod pod-configmaps-b6bbbc8e-45af-4206-a9d2-5d3f2f281ef6 is Pending, waiting for it to be Running (with Ready = true) May 13 22:00:15.844: INFO: The status of Pod pod-configmaps-b6bbbc8e-45af-4206-a9d2-5d3f2f281ef6 is Pending, waiting for it to be Running (with Ready = true) May 13 22:00:17.845: INFO: The status of Pod pod-configmaps-b6bbbc8e-45af-4206-a9d2-5d3f2f281ef6 is Running (Ready = true) STEP: Updating configmap configmap-test-upd-fdeeefd1-77f9-47b7-b2c4-021efd8ec8b7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:21.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9958" for this suite. 
• [SLOW TEST:8.094 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":70,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:12.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:00:12.548: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:00:14.557: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076012, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076012, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076012, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076012, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:00:17.567: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:00:17.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2923-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:25.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6319" for this suite. STEP: Destroying namespace "webhook-6319-markers" for this suite. 
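For orientation, the "Registering the mutating webhook for custom resource ... via the AdmissionRegistration API" step above amounts to creating an object of roughly the following shape. This is a sketch only: every name is an assumption, caBundle is omitted, and a working setup additionally needs the TLS serving cert and webhook service that the spec deploys first.

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-crd-mutator
webhooks:
- name: mutate-custom-resource.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: ["webhook.example.com"]      # the CRD's API group
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["*"]                        # in practice, the CRD's resource plural
  clientConfig:
    service:
      namespace: default
      name: demo-webhook-svc                # must serve TLS; caBundle is normally set too
      path: /mutating-custom-resource
EOF
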
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.469 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":15,"skipped":242,"failed":0} SSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0} [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:06.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-k7p4 STEP: Creating a pod to test atomic-volume-subpath May 13 22:00:06.457: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-k7p4" in namespace "subpath-7362" to be "Succeeded or Failed" May 13 22:00:06.460: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163709ms May 13 22:00:08.463: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005867523s May 13 22:00:10.469: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Running", Reason="", readiness=true. Elapsed: 4.011688586s May 13 22:00:12.476: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Running", Reason="", readiness=true. Elapsed: 6.018239997s May 13 22:00:14.479: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Running", Reason="", readiness=true. Elapsed: 8.021502372s May 13 22:00:16.486: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Running", Reason="", readiness=true. Elapsed: 10.028699895s May 13 22:00:18.490: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Running", Reason="", readiness=true. Elapsed: 12.032947261s May 13 22:00:20.495: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Running", Reason="", readiness=true. Elapsed: 14.037616985s May 13 22:00:22.498: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Running", Reason="", readiness=true. Elapsed: 16.040398591s May 13 22:00:24.502: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Running", Reason="", readiness=true. Elapsed: 18.044175416s May 13 22:00:26.506: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Running", Reason="", readiness=true. Elapsed: 20.048854219s May 13 22:00:28.511: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.053881421s May 13 22:00:30.515: INFO: Pod "pod-subpath-test-projected-k7p4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.057946078s STEP: Saw pod success May 13 22:00:30.515: INFO: Pod "pod-subpath-test-projected-k7p4" satisfied condition "Succeeded or Failed" May 13 22:00:30.518: INFO: Trying to get logs from node node1 pod pod-subpath-test-projected-k7p4 container test-container-subpath-projected-k7p4: STEP: delete the pod May 13 22:00:30.531: INFO: Waiting for pod pod-subpath-test-projected-k7p4 to disappear May 13 22:00:30.533: INFO: Pod pod-subpath-test-projected-k7p4 no longer exists STEP: Deleting pod pod-subpath-test-projected-k7p4 May 13 22:00:30.533: INFO: Deleting pod "pod-subpath-test-projected-k7p4" in namespace "subpath-7362" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:30.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7362" for this suite. • [SLOW TEST:24.125 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":28,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:25.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:00:26.101: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:00:28.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076026, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076026, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076026, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076026, loc:(*time.Location)(0x9e2e180)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:00:31.121: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:31.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5862" for this suite. STEP: Destroying namespace "webhook-5862-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.556 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":16,"skipped":255,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:21.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:32.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9606" for this suite. • [SLOW TEST:11.071 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":10,"skipped":73,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:39.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 13 21:59:39.745: INFO: PodSpec: initContainers in spec.initContainers May 13 22:00:34.111: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1f38c636-6136-4b80-b78f-5af460f33cf5", GenerateName:"", Namespace:"init-container-5122", SelfLink:"", UID:"9b51f44c-1ae3-4249-b5fa-608236095197", ResourceVersion:"34985", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63788075979, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"745823231"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.176\"\n ],\n \"mac\": \"6a:8f:bb:d0:cb:b3\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.176\"\n ],\n \"mac\": \"6a:8f:bb:d0:cb:b3\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003960d50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003960d68)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003960d80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003960d98)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003960db0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003960dc8)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-f4ks4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc004680180), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-f4ks4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-f4ks4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-f4ks4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003c36148), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node1", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003fdc5b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003c361d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003c361f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003c361f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003c361fc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00464c1f0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075979, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075979, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075979, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788075979, loc:(*time.Location)(0x9e2e180)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.207", PodIP:"10.244.3.176", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.176"}}, StartTime:(*v1.Time)(0xc003960df8), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003fdc690)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003fdc700)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://762c3c581d39b40d35560c7a3307cdc57bdc03a981f5f78ff630563084867ae1", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004680220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004680200), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003c3627f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:34.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5122" for this suite. 
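Stripped of framework bookkeeping, the pod in the dump above reduces to the following (same images and commands as in the printed PodSpec; the pod name is shortened for readability). Because init1 always fails and the restart policy is Always, the kubelet retries init1 with backoff forever, and neither init2 nor the app container ever starts:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/false"]    # fails every attempt, blocking init2 and run1
  - name: init2
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.4.1
EOF
kubectl get pod init-fail-demo -w   # RESTARTS climbs on init1; STATUS stays Init:Error / Init:CrashLoopBackOff
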
• [SLOW TEST:54.396 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":14,"skipped":232,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:33.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 13 22:00:33.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8608 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' May 13 22:00:33.231: INFO: stderr: "" May 13 22:00:33.231: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run May 13 22:00:33.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8608 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' May 13 22:00:33.679: INFO: stderr: "" May 13 22:00:33.679: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 13 22:00:33.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8608 delete pods e2e-test-httpd-pod' May 13 22:00:34.584: INFO: stderr: "" May 13 22:00:34.584: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:34.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8608" for this suite. 
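The commands driven above can be replayed directly, minus the suite's --kubeconfig and --namespace plumbing. The point of --dry-run=server is that the patch travels the full API path, admission webhooks included, without being persisted, which is why the image check afterwards still sees the httpd image:

kubectl run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod
kubectl patch pod e2e-test-httpd-pod --dry-run=server \
  -p '{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}'
kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.containers[0].image}'   # unchanged httpd image
kubectl delete pod e2e-test-httpd-pod
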
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":11,"skipped":101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:31.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:37.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-5308" for this suite. • [SLOW TEST:6.071 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":17,"skipped":259,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:34.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-5461/configmap-test-29e766ab-3da1-4ac1-ad0d-c84cea7d918e STEP: Creating a pod to test consume configMaps May 13 22:00:34.196: INFO: Waiting up to 5m0s for pod "pod-configmaps-f230b447-51b4-44af-b928-8413dddd85f9" in namespace "configmap-5461" to be "Succeeded or Failed" May 13 22:00:34.198: INFO: Pod "pod-configmaps-f230b447-51b4-44af-b928-8413dddd85f9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.180263ms May 13 22:00:36.203: INFO: Pod "pod-configmaps-f230b447-51b4-44af-b928-8413dddd85f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007114819s May 13 22:00:38.206: INFO: Pod "pod-configmaps-f230b447-51b4-44af-b928-8413dddd85f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010498317s STEP: Saw pod success May 13 22:00:38.206: INFO: Pod "pod-configmaps-f230b447-51b4-44af-b928-8413dddd85f9" satisfied condition "Succeeded or Failed" May 13 22:00:38.209: INFO: Trying to get logs from node node2 pod pod-configmaps-f230b447-51b4-44af-b928-8413dddd85f9 container env-test: STEP: delete the pod May 13 22:00:38.223: INFO: Waiting for pod pod-configmaps-f230b447-51b4-44af-b928-8413dddd85f9 to disappear May 13 22:00:38.225: INFO: Pod pod-configmaps-f230b447-51b4-44af-b928-8413dddd85f9 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:38.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5461" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":247,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:34.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running May 13 22:00:36.714: INFO: running pods: 0 < 3 May 13 22:00:38.718: INFO: running pods: 0 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:40.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-8844" for this suite. 
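The PodDisruptionBudget status the spec polls for above ("running pods: 0 < 3" until all replicas are up) can be inspected the same way by hand; the names and the pause image below are illustrative, not from this run:

kubectl create deployment pdb-demo --image=k8s.gcr.io/pause:3.4.1 --replicas=3
kubectl create poddisruptionbudget demo-pdb --selector=app=pdb-demo --min-available=2
kubectl get pdb demo-pdb -o jsonpath='{.status}'   # currentHealthy/desiredHealthy/expectedPods fill in as pods turn Running
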
• [SLOW TEST:6.084 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":12,"skipped":126,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:37.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs May 13 22:00:37.402: INFO: Waiting up to 5m0s for pod "pod-39d4e4e5-a10e-489c-b6ff-a45171a2214e" in namespace "emptydir-7147" to be "Succeeded or Failed" May 13 22:00:37.404: INFO: Pod "pod-39d4e4e5-a10e-489c-b6ff-a45171a2214e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.981883ms May 13 22:00:39.408: INFO: Pod "pod-39d4e4e5-a10e-489c-b6ff-a45171a2214e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005746843s May 13 22:00:41.413: INFO: Pod "pod-39d4e4e5-a10e-489c-b6ff-a45171a2214e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010172179s STEP: Saw pod success May 13 22:00:41.413: INFO: Pod "pod-39d4e4e5-a10e-489c-b6ff-a45171a2214e" satisfied condition "Succeeded or Failed" May 13 22:00:41.415: INFO: Trying to get logs from node node2 pod pod-39d4e4e5-a10e-489c-b6ff-a45171a2214e container test-container: STEP: delete the pod May 13 22:00:41.438: INFO: Waiting for pod pod-39d4e4e5-a10e-489c-b6ff-a45171a2214e to disappear May 13 22:00:41.441: INFO: Pod pod-39d4e4e5-a10e-489c-b6ff-a45171a2214e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:41.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7147" for this suite. 
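This (root,0644,tmpfs) case is the same pattern as the earlier non-root variant, minus the runAsUser. That medium: Memory really lands on tmpfs is easy to confirm from inside such a pod; the sketch below uses kubectl run --overrides (pod name and mount path are assumptions, and the overrides JSON is merged client-side into the generated pod):

kubectl run tmpfs-check --restart=Never --image=k8s.gcr.io/e2e-test-images/busybox:1.29-1 \
  --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"tmpfs-check","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1","command":["sh","-c","mount | grep /scratch"],"volumeMounts":[{"name":"scratch","mountPath":"/scratch"}]}],"volumes":[{"name":"scratch","emptyDir":{"medium":"Memory"}}]}}'
kubectl logs tmpfs-check    # once it has run: tmpfs on /scratch type tmpfs (rw, ...)
kubectl delete pod tmpfs-check
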
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:30.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating an pod May 13 22:00:30.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2175 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' May 13 22:00:30.761: INFO: stderr: "" May 13 22:00:30.761: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. May 13 22:00:30.761: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 13 22:00:30.762: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2175" to be "running and ready, or succeeded" May 13 22:00:30.764: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657971ms May 13 22:00:32.769: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007058538s May 13 22:00:34.771: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.009836054s May 13 22:00:34.771: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 13 22:00:34.772: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings May 13 22:00:34.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2175 logs logs-generator logs-generator' May 13 22:00:34.947: INFO: stderr: "" May 13 22:00:34.947: INFO: stdout: "I0513 22:00:33.306016 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/62h 282\nI0513 22:00:33.506569 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/jnk 454\nI0513 22:00:33.706839 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/lkd 485\nI0513 22:00:33.906067 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/rjcn 285\nI0513 22:00:34.106370 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/cxnh 562\nI0513 22:00:34.306583 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/7xjj 338\nI0513 22:00:34.506885 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/w69h 293\nI0513 22:00:34.706100 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/7sl4 542\nI0513 22:00:34.906463 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/gfw 599\n" STEP: limiting log lines May 13 22:00:34.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2175 logs logs-generator logs-generator --tail=1' May 13 22:00:35.098: INFO: stderr: "" May 13 22:00:35.098: INFO: stdout: "I0513 22:00:34.906463 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/gfw 599\n" May 13 22:00:35.098: INFO: got output "I0513 22:00:34.906463 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/gfw 599\n" STEP: limiting log bytes May 13 22:00:35.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2175 logs logs-generator logs-generator --limit-bytes=1' May 13 22:00:35.263: INFO: stderr: "" May 13 22:00:35.263: INFO: stdout: "I" May 13 22:00:35.263: INFO: got output "I" STEP: exposing timestamps May 13 22:00:35.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2175 logs logs-generator logs-generator --tail=1 --timestamps' May 13 22:00:35.445: INFO: stderr: "" May 13 22:00:35.445: INFO: stdout: "2022-05-13T22:00:35.307188275Z I0513 22:00:35.306973 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/6xzf 325\n" May 13 22:00:35.445: INFO: got output "2022-05-13T22:00:35.307188275Z I0513 22:00:35.306973 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/6xzf 325\n" STEP: restricting to a time range May 13 22:00:37.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2175 logs logs-generator logs-generator --since=1s' May 13 22:00:38.140: INFO: stderr: "" May 13 22:00:38.140: INFO: stdout: "I0513 22:00:37.306116 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/jcw6 490\nI0513 22:00:37.506433 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/8ptq 555\nI0513 22:00:37.706756 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/stt 340\nI0513 22:00:37.906033 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/8cph 243\nI0513 22:00:38.106327 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/cnjm 390\n" May 13 22:00:38.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2175 logs logs-generator logs-generator --since=24h' May 13 22:00:38.337: INFO: stderr: "" May 13 22:00:38.337: INFO: stdout: "I0513 22:00:33.306016 1 logs_generator.go:76] 0 PUT 
/api/v1/namespaces/kube-system/pods/62h 282\nI0513 22:00:33.506569 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/jnk 454\nI0513 22:00:33.706839 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/lkd 485\nI0513 22:00:33.906067 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/rjcn 285\nI0513 22:00:34.106370 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/cxnh 562\nI0513 22:00:34.306583 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/7xjj 338\nI0513 22:00:34.506885 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/w69h 293\nI0513 22:00:34.706100 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/7sl4 542\nI0513 22:00:34.906463 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/gfw 599\nI0513 22:00:35.106575 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/kbn 227\nI0513 22:00:35.306973 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/6xzf 325\nI0513 22:00:35.506234 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/cvk 240\nI0513 22:00:35.706501 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/t4w 328\nI0513 22:00:35.907000 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/5gxc 460\nI0513 22:00:36.106248 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/pml2 312\nI0513 22:00:36.306562 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/jrs 218\nI0513 22:00:36.506865 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/7vzm 229\nI0513 22:00:36.706100 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/h9z 594\nI0513 22:00:36.906480 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/csj4 359\nI0513 22:00:37.106850 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/mtr4 231\nI0513 22:00:37.306116 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/jcw6 490\nI0513 22:00:37.506433 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/8ptq 555\nI0513 22:00:37.706756 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/stt 340\nI0513 22:00:37.906033 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/8cph 243\nI0513 22:00:38.106327 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/cnjm 390\nI0513 22:00:38.306742 1 logs_generator.go:76] 25 POST /api/v1/namespaces/ns/pods/9xl9 397\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 May 13 22:00:38.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2175 delete pod logs-generator' May 13 22:00:42.967: INFO: stderr: "" May 13 22:00:42.967: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:42.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2175" for this suite. 
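The filter flags exercised above are ordinary kubectl logs options and behave the same outside the suite; as in the invocations logged, the first argument is the pod and the second the container:

kubectl logs logs-generator logs-generator --tail=1          # last line only
kubectl logs logs-generator logs-generator --limit-bytes=1   # truncate output to one byte
kubectl logs logs-generator logs-generator --tail=1 --timestamps
kubectl logs logs-generator logs-generator --since=1s        # only lines from the last second
kubectl logs logs-generator logs-generator --since=24h       # effectively the whole log
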
• [SLOW TEST:12.388 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:41.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:00:41.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b1c5e59-3504-4dcc-b3e6-409e69f47a4f" in namespace "downward-api-9589" to be "Succeeded or Failed" May 13 22:00:41.547: INFO: Pod "downwardapi-volume-5b1c5e59-3504-4dcc-b3e6-409e69f47a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.789384ms May 13 22:00:43.552: INFO: Pod "downwardapi-volume-5b1c5e59-3504-4dcc-b3e6-409e69f47a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006536096s May 13 22:00:45.556: INFO: Pod "downwardapi-volume-5b1c5e59-3504-4dcc-b3e6-409e69f47a4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010542326s STEP: Saw pod success May 13 22:00:45.556: INFO: Pod "downwardapi-volume-5b1c5e59-3504-4dcc-b3e6-409e69f47a4f" satisfied condition "Succeeded or Failed" May 13 22:00:45.559: INFO: Trying to get logs from node node2 pod downwardapi-volume-5b1c5e59-3504-4dcc-b3e6-409e69f47a4f container client-container: STEP: delete the pod May 13 22:00:45.573: INFO: Waiting for pod downwardapi-volume-5b1c5e59-3504-4dcc-b3e6-409e69f47a4f to disappear May 13 22:00:45.575: INFO: Pod downwardapi-volume-5b1c5e59-3504-4dcc-b3e6-409e69f47a4f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:45.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9589" for this suite. 
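What this Downward API case checks is that limits.memory, when no memory limit is declared on the container, falls back to the node's allocatable memory. A minimal sketch (names are illustrative; note that a resourceFieldRef used in a volume must name the container it refers to):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory      # no limit set, so node allocatable is reported
EOF
kubectl logs downward-limit-demo
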
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:38.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:00:38.608: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:00:40.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076038, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076038, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076038, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076038, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:00:42.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076038, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076038, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076038, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076038, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:00:45.628: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:45.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6538" for this suite. STEP: Destroying namespace "webhook-6538-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.451 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":16,"skipped":255,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:45.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:45.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-7445" for this suite. 
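"Run the lifecycle" in the PodTemplates spec above is just create/get/patch/delete against the core v1 PodTemplate resource, which can be driven by hand along these lines (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: PodTemplate
metadata:
  name: demo-podtemplate
template:
  metadata:
    labels:
      app: demo
  spec:
    containers:
    - name: c
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
EOF
kubectl get podtemplates
kubectl patch podtemplate demo-podtemplate -p '{"metadata":{"labels":{"updated":"true"}}}'
kubectl delete podtemplate demo-podtemplate
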
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":20,"skipped":342,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:42.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 13 22:00:43.016: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:49.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6119" for this suite. • [SLOW TEST:6.971 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":7,"skipped":54,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:52.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:52.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7668" for this suite. 
• [SLOW TEST:60.043 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":110,"failed":0}
SSSS
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 21:58:36.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service nodeport-test with type=NodePort in namespace services-4471
STEP: creating replication controller nodeport-test in namespace services-4471
I0513 21:58:36.254451      32 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4471, replica count: 2
I0513 21:58:39.304743      32 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0513 21:58:42.306158      32 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0513 21:58:45.307818      32 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 13 21:58:45.307: INFO: Creating new exec pod
May 13 21:58:50.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
May 13 21:58:50.594: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
May 13 21:58:50.594: INFO: stdout: "nodeport-test-m8kmx"
May 13 21:58:50.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.9.144 80'
May 13 21:58:50.856: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.9.144 80\nConnection to 10.233.9.144 80 port [tcp/http] succeeded!\n"
May 13 21:58:50.856: INFO: stdout: "nodeport-test-692vl"
May 13 21:58:50.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265'
May 13 21:58:51.128: INFO: rc: 1
May 13 21:58:51.129: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl
--kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31265
nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... the same probe was rerun roughly once per second from May 13 21:58:52 through May 13 22:00:21; every attempt returned rc: 1 with the same "nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused" error, each followed by "Retrying..." ...]
May 13 22:00:22.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:22.406: INFO: rc: 1 May 13 22:00:22.406: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:23.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:23.388: INFO: rc: 1 May 13 22:00:23.388: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:24.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:24.376: INFO: rc: 1 May 13 22:00:24.376: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:25.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:25.430: INFO: rc: 1 May 13 22:00:25.430: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:26.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:26.405: INFO: rc: 1 May 13 22:00:26.405: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:00:27.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:27.870: INFO: rc: 1 May 13 22:00:27.870: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:28.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:28.376: INFO: rc: 1 May 13 22:00:28.376: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31265 + echo hostName nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:29.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:29.365: INFO: rc: 1 May 13 22:00:29.365: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:30.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:30.590: INFO: rc: 1 May 13 22:00:30.590: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:31.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:31.479: INFO: rc: 1 May 13 22:00:31.479: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:00:32.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:32.687: INFO: rc: 1 May 13 22:00:32.687: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:33.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:33.381: INFO: rc: 1 May 13 22:00:33.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:34.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:34.417: INFO: rc: 1 May 13 22:00:34.417: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:35.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:35.764: INFO: rc: 1 May 13 22:00:35.764: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:36.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:36.482: INFO: rc: 1 May 13 22:00:36.482: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:00:37.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:37.516: INFO: rc: 1 May 13 22:00:37.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:38.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:38.613: INFO: rc: 1 May 13 22:00:38.613: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:39.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:39.917: INFO: rc: 1 May 13 22:00:39.917: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:40.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:40.442: INFO: rc: 1 May 13 22:00:40.442: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:41.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:41.388: INFO: rc: 1 May 13 22:00:41.388: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:00:42.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:42.415: INFO: rc: 1 May 13 22:00:42.415: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:43.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:43.374: INFO: rc: 1 May 13 22:00:43.374: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:44.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:44.372: INFO: rc: 1 May 13 22:00:44.372: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:45.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:45.464: INFO: rc: 1 May 13 22:00:45.464: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:46.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:46.655: INFO: rc: 1 May 13 22:00:46.655: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:00:47.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:47.468: INFO: rc: 1 May 13 22:00:47.468: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:48.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:48.398: INFO: rc: 1 May 13 22:00:48.398: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:49.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:49.415: INFO: rc: 1 May 13 22:00:49.415: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:50.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:50.401: INFO: rc: 1 May 13 22:00:50.401: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:51.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265' May 13 22:00:51.757: INFO: rc: 1 May 13 22:00:51.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31265 nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
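The command the test keeps re-running above is a plain TCP reachability probe executed inside the exec pod: 'nc -v -t -w 2 <node-IP> <node-port>' makes one TCP connection attempt with a 2-second timeout, and "Connection refused" means the node answered but nothing accepted the connection on NodePort 31265. For reference, a minimal Go sketch of the same single probe, using only the endpoint taken from the log; the file name and output wording are illustrative, not part of the suite:

package main

// tcpprobe.go: a stand-in for 'nc -v -t -w 2 10.10.190.207 31265' -
// one TCP connection attempt with a 2-second timeout.
import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "10.10.190.207:31265" // node IP and NodePort from the log above
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// "connection refused" means the packet reached the node but
		// nothing was listening or forwarding on that port.
		fmt.Fprintf(os.Stderr, "connect to %s failed: %v\n", addr, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("connect to %s succeeded\n", addr)
}

Note that in the log the probe runs inside pod execpodqk6f4 on node2, so it exercises the pod-to-NodePort path; this sketch dials from wherever it is run.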
May 13 22:00:51.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265'
May 13 22:00:52.021: INFO: rc: 1
May 13 22:00:52.021: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4471 exec execpodqk6f4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31265:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31265
nc: connect to 10.10.190.207 port 31265 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 22:00:52.022: FAIL: Unexpected error:
    <*errors.errorString | 0xc0047663a0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31265 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31265 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00125b200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00125b200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00125b200, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-4471".
STEP: Found 17 events.
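The 2m0s in the failure message matches the loop visible above: the framework re-runs the probe roughly once per second until the service responds or the deadline expires, then aborts the spec at test/e2e/network/service.go:1169. Below is a sketch of that retry-until-deadline pattern against the same endpoint; it illustrates the polling logic only and is not the framework's actual implementation:

package main

// pollprobe.go: retry a TCP probe about once per second until it
// succeeds or a 2-minute budget is exhausted, mirroring the
// "Retrying..." loop in the log above.
import (
	"fmt"
	"net"
	"time"
)

func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // endpoint is reachable
		}
		fmt.Printf("probe failed (%v), retrying...\n", err)
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	if err := waitForTCP("10.10.190.207:31265", 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}

The 17 events collected during teardown follow.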
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:36 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-692vl
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:36 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-m8kmx
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:36 +0000 UTC - event for nodeport-test-692vl: {default-scheduler } Scheduled: Successfully assigned services-4471/nodeport-test-692vl to node1
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:36 +0000 UTC - event for nodeport-test-m8kmx: {default-scheduler } Scheduled: Successfully assigned services-4471/nodeport-test-m8kmx to node2
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:38 +0000 UTC - event for nodeport-test-692vl: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:39 +0000 UTC - event for nodeport-test-692vl: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 352.405886ms
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:39 +0000 UTC - event for nodeport-test-692vl: {kubelet node1} Created: Created container nodeport-test
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:40 +0000 UTC - event for nodeport-test-692vl: {kubelet node1} Started: Started container nodeport-test
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:40 +0000 UTC - event for nodeport-test-m8kmx: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:40 +0000 UTC - event for nodeport-test-m8kmx: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 273.865029ms
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:41 +0000 UTC - event for nodeport-test-m8kmx: {kubelet node2} Created: Created container nodeport-test
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:41 +0000 UTC - event for nodeport-test-m8kmx: {kubelet node2} Started: Started container nodeport-test
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:45 +0000 UTC - event for execpodqk6f4: {default-scheduler } Scheduled: Successfully assigned services-4471/execpodqk6f4 to node2
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:47 +0000 UTC - event for execpodqk6f4: {kubelet node2} Started: Started container agnhost-container
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:47 +0000 UTC - event for execpodqk6f4: {kubelet node2} Created: Created container agnhost-container
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:47 +0000 UTC - event for execpodqk6f4: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:00:52.027: INFO: At 2022-05-13 21:58:47 +0000 UTC - event for execpodqk6f4: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 290.381999ms
May 13 22:00:52.029: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 22:00:52.030: INFO: execpodqk6f4 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:45 +0000 UTC }]
May 13 22:00:52.030: INFO: nodeport-test-692vl node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:40 +0000 UTC }
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:36 +0000 UTC }] May 13 22:00:52.030: INFO: nodeport-test-m8kmx node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 21:58:36 +0000 UTC }] May 13 22:00:52.030: INFO: May 13 22:00:52.034: INFO: Logging node info for node master1 May 13 22:00:52.036: INFO: Node Info: &Node{ObjectMeta:{master1 e893469e-45f9-457b-9379-276178f6209f 35211 0 2022-05-13 19:57:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:57:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-13 19:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-13 20:05:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-13 20:09:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:41 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:41 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:41 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:00:41 +0000 UTC,LastTransitionTime:2022-05-13 20:03:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5bc4f1fb629f4c3bb455995355cca59c,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:196d75bb-273f-44bf-9b96-1cfef0d34445,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:00:52.037: INFO: Logging kubelet events for node master1 May 13 22:00:52.040: INFO: Logging pods the kubelet thinks is on node master1 May 13 22:00:52.068: INFO: kube-apiserver-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses 
recorded) May 13 22:00:52.068: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:00:52.068: INFO: kube-controller-manager-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.068: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:00:52.068: INFO: kube-scheduler-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.068: INFO: Container kube-scheduler ready: true, restart count 0 May 13 22:00:52.068: INFO: kube-flannel-jw4mp started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:00:52.068: INFO: Init container install-cni ready: true, restart count 2 May 13 22:00:52.068: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:00:52.068: INFO: kube-multus-ds-amd64-ts4fz started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.068: INFO: Container kube-multus ready: true, restart count 1 May 13 22:00:52.068: INFO: container-registry-65d7c44b96-gqdgz started at 2022-05-13 20:05:09 +0000 UTC (0+2 container statuses recorded) May 13 22:00:52.068: INFO: Container docker-registry ready: true, restart count 0 May 13 22:00:52.068: INFO: Container nginx ready: true, restart count 0 May 13 22:00:52.068: INFO: kube-proxy-6q994 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.068: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:00:52.068: INFO: node-feature-discovery-controller-cff799f9f-k2qmv started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.068: INFO: Container nfd-controller ready: true, restart count 0 May 13 22:00:52.068: INFO: node-exporter-2jxfg started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:00:52.068: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:00:52.068: INFO: Container node-exporter ready: true, restart count 0 May 13 22:00:52.163: INFO: Latency metrics for node master1 May 13 22:00:52.164: INFO: Logging node info for node master2 May 13 22:00:52.166: INFO: Node Info: &Node{ObjectMeta:{master2 6394fb00-7ac6-4b0d-af37-0e7baf892992 35808 0 2022-05-13 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:51 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:51 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:51 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:00:51 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0c26206724384f32848637ec210bf517,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:87b6bd6a-947f-4fda-a24f-503738da156e,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:00:52.166: INFO: Logging kubelet events for node master2 May 13 22:00:52.169: INFO: Logging pods the kubelet thinks is on node master2 May 13 22:00:52.184: INFO: coredns-8474476ff8-m6b8s started at 2022-05-13 20:01:00 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.184: INFO: Container coredns ready: true, restart count 1 May 13 22:00:52.184: INFO: kube-apiserver-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.184: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:00:52.184: INFO: kube-proxy-jxbwz started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.184: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:00:52.184: INFO: kube-flannel-gndff started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:00:52.184: INFO: Init container install-cni ready: true, restart count 2 May 13 22:00:52.184: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:00:52.184: INFO: kube-multus-ds-amd64-w98wb started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.184: INFO: Container kube-multus ready: true, restart count 1 May 13 22:00:52.184: INFO: kube-controller-manager-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.184: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:00:52.184: INFO: kube-scheduler-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.184: INFO: Container kube-scheduler ready: true, restart count 2 May 13 22:00:52.184: INFO: node-exporter-zmlpx started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:00:52.184: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:00:52.184: INFO: Container node-exporter ready: true, restart count 0 May 13 22:00:52.274: INFO: Latency metrics for node master2 May 13 22:00:52.274: INFO: Logging node info for node master3 May 13 22:00:52.276: INFO: Node Info: &Node{ObjectMeta:{master3 11a40d0b-d9d1-449f-a587-cc897edbfd9b 35613 0 2022-05-13 19:58:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:50 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:50 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:50 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:00:50 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fba609db464f479c06da20414d1979,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:55d995b3-c2cc-4b60-96f4-5a990abd0c48,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:00:52.277: INFO: Logging kubelet events for node master3 May 13 22:00:52.279: INFO: Logging pods the kubelet thinks is on node master3 May 13 22:00:52.292: INFO: dns-autoscaler-7df78bfcfb-wfmpz started at 2022-05-13 20:01:02 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.292: INFO: Container autoscaler ready: true, restart count 1 May 13 22:00:52.292: INFO: node-exporter-qh76s started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:00:52.292: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:00:52.292: INFO: Container node-exporter ready: true, restart count 0 May 13 22:00:52.292: INFO: kube-controller-manager-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.292: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:00:52.292: INFO: kube-scheduler-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.292: INFO: Container kube-scheduler ready: true, restart count 2 May 13 22:00:52.292: INFO: kube-proxy-6fl99 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.292: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:00:52.292: INFO: kube-flannel-p5mwf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:00:52.292: INFO: Init container install-cni ready: true, restart count 0 May 13 22:00:52.292: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:00:52.292: INFO: kube-apiserver-master3 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.292: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:00:52.292: INFO: kube-multus-ds-amd64-ffgk5 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.292: INFO: Container kube-multus ready: true, restart count 1 May 13 22:00:52.292: INFO: coredns-8474476ff8-x29nh started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.292: INFO: Container coredns ready: true, restart count 1 May 13 22:00:52.381: INFO: Latency metrics for node master3 May 13 22:00:52.381: INFO: Logging node info for node node1 May 13 22:00:52.385: INFO: Node Info: &Node{ObjectMeta:{node1 dca01e5e-a739-4ccc-b102-bfd163c4b832 35776 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true 
feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:12:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:20 +0000 UTC,LastTransitionTime:2022-05-13 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:51 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:51 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:51 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:00:51 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f73ea6ef9607468c91208265a5b02a1b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ff172cf5-ca8f-45aa-ade2-6dea8be1d249,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003949300,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:00:52.385: INFO: Logging kubelet events for node node1 May 13 22:00:52.388: INFO: Logging pods the kubelet thinks is on node node1 May 13 22:00:52.405: INFO: kube-proxy-rs2zg started at 2022-05-13 19:59:27 
+0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.405: INFO: Container kube-proxy ready: true, restart count 2
May 13 22:00:52.405: INFO: cmk-init-discover-node1-m2p59 started at 2022-05-13 20:12:33 +0000 UTC (0+3 container statuses recorded)
May 13 22:00:52.405: INFO: Container discover ready: false, restart count 0
May 13 22:00:52.405: INFO: Container init ready: false, restart count 0
May 13 22:00:52.405: INFO: Container install ready: false, restart count 0
May 13 22:00:52.405: INFO: cmk-webhook-6c9d5f8578-59hj6 started at 2022-05-13 20:13:16 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.405: INFO: Container cmk-webhook ready: true, restart count 0
May 13 22:00:52.405: INFO: prometheus-k8s-0 started at 2022-05-13 20:14:32 +0000 UTC (0+4 container statuses recorded)
May 13 22:00:52.405: INFO: Container config-reloader ready: true, restart count 0
May 13 22:00:52.405: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 13 22:00:52.405: INFO: Container grafana ready: true, restart count 0
May 13 22:00:52.405: INFO: Container prometheus ready: true, restart count 1
May 13 22:00:52.405: INFO: affinity-nodeport-transition-np7sm started at 2022-05-13 21:58:35 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.405: INFO: Container affinity-nodeport-transition ready: true, restart count 0
May 13 22:00:52.405: INFO: nginx-proxy-node1 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.405: INFO: Container nginx-proxy ready: true, restart count 2
May 13 22:00:52.405: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.405: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2
May 13 22:00:52.405: INFO: collectd-p26j2 started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 22:00:52.405: INFO: Container collectd ready: true, restart count 0
May 13 22:00:52.405: INFO: Container collectd-exporter ready: true, restart count 0
May 13 22:00:52.405: INFO: Container rbac-proxy ready: true, restart count 0
May 13 22:00:52.405: INFO: affinity-nodeport-transition-rv2gq started at 2022-05-13 21:58:35 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.405: INFO: Container affinity-nodeport-transition ready: true, restart count 0
May 13 22:00:52.405: INFO: pod-configmaps-f3ccd94b-020a-4950-9b4b-f949dd43d13f started at 2022-05-13 22:00:00 +0000 UTC (0+3 container statuses recorded)
May 13 22:00:52.405: INFO: Container createcm-volume-test ready: true, restart count 0
May 13 22:00:52.405: INFO: Container delcm-volume-test ready: true, restart count 0
May 13 22:00:52.405: INFO: Container updcm-volume-test ready: true, restart count 0
May 13 22:00:52.405: INFO: test-webserver-9e0d337d-5f26-42ce-a270-201e2d55dd29 started at 2022-05-13 21:59:52 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.405: INFO: Container test-webserver ready: false, restart count 0
May 13 22:00:52.405: INFO: kube-multus-ds-amd64-dtt2x started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.405: INFO: Container kube-multus ready: true, restart count 1
May 13 22:00:52.405: INFO: kubernetes-dashboard-785dcbb76d-tcgth started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.405: INFO: Container kubernetes-dashboard ready: true, restart count 2
May 13 22:00:52.405: INFO: node-feature-discovery-worker-l459c started at 2022-05-13 20:08:58
+0000 UTC (0+1 container statuses recorded) May 13 22:00:52.405: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:00:52.405: INFO: node-exporter-42x8d started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:00:52.405: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:00:52.405: INFO: Container node-exporter ready: true, restart count 0 May 13 22:00:52.405: INFO: kube-flannel-xfj7m started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:00:52.405: INFO: Init container install-cni ready: true, restart count 2 May 13 22:00:52.405: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:00:52.405: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.405: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:00:52.405: INFO: cmk-tfblh started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded) May 13 22:00:52.405: INFO: Container nodereport ready: true, restart count 0 May 13 22:00:52.405: INFO: Container reconcile ready: true, restart count 0 May 13 22:00:52.405: INFO: nodeport-test-692vl started at 2022-05-13 21:58:36 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.405: INFO: Container nodeport-test ready: true, restart count 0 May 13 22:00:52.405: INFO: terminate-cmd-rpa484f349b-9332-4506-9e45-fd7d8121ea7c started at 2022-05-13 22:00:40 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.405: INFO: Container terminate-cmd-rpa ready: false, restart count 1 May 13 22:00:52.586: INFO: Latency metrics for node node1 May 13 22:00:52.586: INFO: Logging node info for node node2 May 13 22:00:52.588: INFO: Node Info: &Node{ObjectMeta:{node2 461ea6c2-df11-4be4-802e-29bddc0f2535 35274 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 
feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:13:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:44 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:44 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:44 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:00:44 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b36a7c38429c4cc598bd0e6ca8278ad0,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:4fcc32fc-d037-4cf9-a62f-f372f6cc17cb,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:00:52.588: INFO: Logging kubelet events for node node2 May 13 22:00:52.590: INFO: Logging pods the kubelet thinks is on node node2 May 13 22:00:52.608: INFO: affinity-nodeport-transition-swpvj started at 2022-05-13 21:58:35 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.608: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 13 22:00:52.609: INFO: liveness-b5948c4c-6c87-4a5f-97b7-b5ebe4a4b993 started at 2022-05-13 21:59:04 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.609: INFO: Container agnhost-container ready: false, restart count 4 May 13 22:00:52.609: INFO: svc-latency-rc-jp846 started at 2022-05-13 22:00:45 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.609: INFO: Container svc-latency-rc ready: true, restart count 0 May 13 22:00:52.609: INFO: downwardapi-volume-d1a2aed5-d6c0-4234-b3f1-060b2aa969a8 started at 2022-05-13 22:00:50 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.609: INFO: Container client-container ready: false, restart count 0 May 13 22:00:52.609: INFO: kube-multus-ds-amd64-l7nx2 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:00:52.609: INFO: Container kube-multus ready: true, restart count 1 May 13 22:00:52.609: INFO: cmk-qhbd6 started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded) 
May 13 22:00:52.609: INFO: Container nodereport ready: true, restart count 0
May 13 22:00:52.609: INFO: Container reconcile ready: true, restart count 0
May 13 22:00:52.609: INFO: prometheus-operator-585ccfb458-vrwnp started at 2022-05-13 20:14:11 +0000 UTC (0+2 container statuses recorded)
May 13 22:00:52.609: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 13 22:00:52.609: INFO: Container prometheus-operator ready: true, restart count 0
May 13 22:00:52.609: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 started at 2022-05-13 20:17:23 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.609: INFO: Container tas-extender ready: true, restart count 0
May 13 22:00:52.609: INFO: sample-crd-conversion-webhook-deployment-697cdbd8f4-kwgl6 started at 2022-05-13 22:00:46 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.609: INFO: Container sample-crd-conversion-webhook ready: true, restart count 0
May 13 22:00:52.609: INFO: busybox-e33fcb59-7a0e-46aa-8f4a-abcb91fcfab7 started at 2022-05-13 21:58:09 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.609: INFO: Container busybox ready: true, restart count 0
May 13 22:00:52.609: INFO: nginx-proxy-node2 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.609: INFO: Container nginx-proxy ready: true, restart count 2
May 13 22:00:52.609: INFO: kube-proxy-wkzbm started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.609: INFO: Container kube-proxy ready: true, restart count 2
May 13 22:00:52.609: INFO: node-feature-discovery-worker-cxxqf started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.609: INFO: Container nfd-worker ready: true, restart count 0
May 13 22:00:52.609: INFO: cmk-init-discover-node2-hm7r7 started at 2022-05-13 20:12:52 +0000 UTC (0+3 container statuses recorded)
May 13 22:00:52.609: INFO: Container discover ready: false, restart count 0
May 13 22:00:52.609: INFO: Container init ready: false, restart count 0
May 13 22:00:52.609: INFO: Container install ready: false, restart count 0
May 13 22:00:52.609: INFO: nodeport-test-m8kmx started at 2022-05-13 21:58:36 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.609: INFO: Container nodeport-test ready: true, restart count 0
May 13 22:00:52.609: INFO: execpodqk6f4 started at 2022-05-13 21:58:45 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.609: INFO: Container agnhost-container ready: true, restart count 0
May 13 22:00:52.609: INFO: pod-init-731e75ad-1176-4ade-aa16-40e32ad94434 started at 2022-05-13 22:00:43 +0000 UTC (2+1 container statuses recorded)
May 13 22:00:52.609: INFO: Init container init1 ready: true, restart count 0
May 13 22:00:52.609: INFO: Init container init2 ready: true, restart count 0
May 13 22:00:52.609: INFO: Container run1 ready: false, restart count 0
May 13 22:00:52.609: INFO: kube-flannel-lv9xf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 22:00:52.609: INFO: Init container install-cni ready: true, restart count 2
May 13 22:00:52.609: INFO: Container kube-flannel ready: true, restart count 2
May 13 22:00:52.609: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded)
May 13 22:00:52.609: INFO: Container kube-sriovdp ready: true, restart count 0
May 13 22:00:52.609: INFO: node-exporter-n5snd started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 22:00:52.609: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 13 22:00:52.609: INFO: Container node-exporter ready: true, restart count 0
May 13 22:00:52.609: INFO: collectd-9gqhr started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 22:00:52.609: INFO: Container collectd ready: true, restart count 0
May 13 22:00:52.609: INFO: Container collectd-exporter ready: true, restart count 0
May 13 22:00:52.609: INFO: Container rbac-proxy ready: true, restart count 0
May 13 22:00:53.097: INFO: Latency metrics for node node2
May 13 22:00:53.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4471" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [136.883 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  May 13 22:00:52.022: Unexpected error:
      <*errors.errorString | 0xc0047663a0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31265 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31265 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169
------------------------------
{"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":3,"skipped":6,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
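Note on the failure above: this is the only spec that fails in this section of the run. The service never answered on node1's InternalIP at the allocated NodePort (10.10.190.207:31265), even though the backing pods nodeport-test-692vl (node1) and nodeport-test-m8kmx (node2) report ready in the kubelet listings collected for the dump. A minimal standalone sketch of the same TCP reachability probe is below; the endpoint and the 2m0s budget are taken from the error message, while the per-attempt dial timeout and retry interval are assumptions for illustration, not the framework's actual check in test/e2e/network/service.go.

package main

import (
	"fmt"
	"net"
	"time"
)

// Repeatedly dial the NodePort endpoint until it answers or the budget
// is spent, mirroring the shape of the failed reachability check above.
func main() {
	const endpoint = "10.10.190.207:31265" // node1 InternalIP + allocated NodePort, per the log
	deadline := time.Now().Add(2 * time.Minute) // 2m0s budget, per the error message
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, 3*time.Second) // per-attempt timeout: assumption
		if err == nil {
			conn.Close()
			fmt.Println("endpoint reachable")
			return
		}
		fmt.Printf("dial failed: %v; retrying\n", err)
		time.Sleep(2 * time.Second) // retry interval: assumption
	}
	fmt.Println("not reachable within 2m0s on " + endpoint)
}

Run from outside the cluster, a persistent dial failure with both endpoint pods ready would point toward kube-proxy programming or host-level filtering on the node rather than toward the pods themselves, which is presumably why the framework dumps per-node state on this failure.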
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 22:00:49.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 13 22:00:50.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1a2aed5-d6c0-4234-b3f1-060b2aa969a8" in namespace "projected-3454" to be "Succeeded or Failed"
May 13 22:00:50.025: INFO: Pod "downwardapi-volume-d1a2aed5-d6c0-4234-b3f1-060b2aa969a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31159ms
May 13 22:00:52.028: INFO: Pod "downwardapi-volume-d1a2aed5-d6c0-4234-b3f1-060b2aa969a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005652032s
May 13 22:00:54.031: INFO: Pod "downwardapi-volume-d1a2aed5-d6c0-4234-b3f1-060b2aa969a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008154331s
STEP: Saw pod success
May 13 22:00:54.031: INFO: Pod "downwardapi-volume-d1a2aed5-d6c0-4234-b3f1-060b2aa969a8" satisfied condition "Succeeded or Failed"
May 13 22:00:54.033: INFO: Trying to get logs from node node2 pod downwardapi-volume-d1a2aed5-d6c0-4234-b3f1-060b2aa969a8 container client-container:
STEP: delete the pod
May 13 22:00:54.064: INFO: Waiting for pod downwardapi-volume-d1a2aed5-d6c0-4234-b3f1-060b2aa969a8 to disappear
May 13 22:00:54.066: INFO: Pod downwardapi-volume-d1a2aed5-d6c0-4234-b3f1-060b2aa969a8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 22:00:54.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3454" for this suite.
•
------------------------------
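For contrast, the Projected downwardAPI spec above passes: the pod projects its container's memory limit into a downward API volume file, and the test reads the container's output back. A rough sketch of the volume wiring such a spec exercises, using k8s.io/api/core/v1 types, is below; the volume name "podinfo" and file path "memory_limit" are illustrative assumptions, while "client-container" and the limits.memory resource selector match what the log shows.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Downward API volume projecting the container's memory limit into a file.
	vol := corev1.Volume{
		Name: "podinfo", // illustrative name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit", // illustrative path
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container", // container name from the log
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}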
latency-svc-nlgj6 [28.362319ms] May 13 22:00:49.966: INFO: Created: latency-svc-n88fq May 13 22:00:49.968: INFO: Got endpoints: latency-svc-n88fq [30.029378ms] May 13 22:00:49.970: INFO: Created: latency-svc-xt225 May 13 22:00:49.972: INFO: Created: latency-svc-445zf May 13 22:00:49.972: INFO: Got endpoints: latency-svc-xt225 [34.195824ms] May 13 22:00:49.975: INFO: Got endpoints: latency-svc-445zf [36.586672ms] May 13 22:00:49.975: INFO: Created: latency-svc-tnx4f May 13 22:00:49.977: INFO: Got endpoints: latency-svc-tnx4f [39.430226ms] May 13 22:00:49.979: INFO: Created: latency-svc-z5rhx May 13 22:00:49.981: INFO: Got endpoints: latency-svc-z5rhx [42.550587ms] May 13 22:00:49.981: INFO: Created: latency-svc-2gwxf May 13 22:00:49.983: INFO: Got endpoints: latency-svc-2gwxf [45.233361ms] May 13 22:00:49.984: INFO: Created: latency-svc-97rkb May 13 22:00:49.986: INFO: Got endpoints: latency-svc-97rkb [48.40946ms] May 13 22:00:49.987: INFO: Created: latency-svc-67vcn May 13 22:00:49.990: INFO: Created: latency-svc-xkzz6 May 13 22:00:49.990: INFO: Got endpoints: latency-svc-67vcn [42.031995ms] May 13 22:00:49.991: INFO: Got endpoints: latency-svc-xkzz6 [43.772315ms] May 13 22:00:49.993: INFO: Created: latency-svc-gj298 May 13 22:00:49.995: INFO: Got endpoints: latency-svc-gj298 [42.713537ms] May 13 22:00:49.999: INFO: Created: latency-svc-lfdk6 May 13 22:00:49.999: INFO: Created: latency-svc-fczgb May 13 22:00:50.001: INFO: Got endpoints: latency-svc-lfdk6 [45.08698ms] May 13 22:00:50.001: INFO: Got endpoints: latency-svc-fczgb [44.070873ms] May 13 22:00:50.005: INFO: Created: latency-svc-jk9wt May 13 22:00:50.007: INFO: Got endpoints: latency-svc-jk9wt [46.774416ms] May 13 22:00:50.008: INFO: Created: latency-svc-f968w May 13 22:00:50.010: INFO: Got endpoints: latency-svc-f968w [46.645713ms] May 13 22:00:50.010: INFO: Created: latency-svc-h52s5 May 13 22:00:50.012: INFO: Got endpoints: latency-svc-h52s5 [46.055672ms] May 13 22:00:50.013: INFO: Created: latency-svc-rh6pk May 13 22:00:50.015: INFO: Got endpoints: latency-svc-rh6pk [47.083798ms] May 13 22:00:50.016: INFO: Created: latency-svc-66pvg May 13 22:00:50.019: INFO: Created: latency-svc-6xm6q May 13 22:00:50.019: INFO: Got endpoints: latency-svc-66pvg [46.462315ms] May 13 22:00:50.020: INFO: Got endpoints: latency-svc-6xm6q [45.593574ms] May 13 22:00:50.022: INFO: Created: latency-svc-6mgbv May 13 22:00:50.025: INFO: Created: latency-svc-cktqp May 13 22:00:50.025: INFO: Got endpoints: latency-svc-6mgbv [47.504998ms] May 13 22:00:50.027: INFO: Got endpoints: latency-svc-cktqp [46.385197ms] May 13 22:00:50.028: INFO: Created: latency-svc-x65jl May 13 22:00:50.030: INFO: Got endpoints: latency-svc-x65jl [47.03504ms] May 13 22:00:50.031: INFO: Created: latency-svc-bjqkk May 13 22:00:50.033: INFO: Got endpoints: latency-svc-bjqkk [46.442112ms] May 13 22:00:50.034: INFO: Created: latency-svc-8k7th May 13 22:00:50.036: INFO: Got endpoints: latency-svc-8k7th [46.246622ms] May 13 22:00:50.036: INFO: Created: latency-svc-cqnlx May 13 22:00:50.039: INFO: Created: latency-svc-492c7 May 13 22:00:50.042: INFO: Created: latency-svc-vvs6t May 13 22:00:50.044: INFO: Created: latency-svc-c2nk9 May 13 22:00:50.047: INFO: Created: latency-svc-drm8k May 13 22:00:50.050: INFO: Created: latency-svc-tpp8c May 13 22:00:50.052: INFO: Created: latency-svc-skxlh May 13 22:00:50.054: INFO: Created: latency-svc-rsf2w May 13 22:00:50.057: INFO: Created: latency-svc-4gzv7 May 13 22:00:50.059: INFO: Created: latency-svc-brdrd May 13 22:00:50.062: INFO: Created: 
latency-svc-rj9k2 May 13 22:00:50.065: INFO: Created: latency-svc-jrpld May 13 22:00:50.068: INFO: Created: latency-svc-whqmp May 13 22:00:50.071: INFO: Created: latency-svc-ck55v May 13 22:00:50.073: INFO: Created: latency-svc-m68kl May 13 22:00:50.087: INFO: Got endpoints: latency-svc-cqnlx [95.331537ms] May 13 22:00:50.092: INFO: Created: latency-svc-85bhr May 13 22:00:50.136: INFO: Got endpoints: latency-svc-492c7 [141.506714ms] May 13 22:00:50.141: INFO: Created: latency-svc-zl6bt May 13 22:00:50.187: INFO: Got endpoints: latency-svc-vvs6t [185.666621ms] May 13 22:00:50.192: INFO: Created: latency-svc-drgjd May 13 22:00:50.238: INFO: Got endpoints: latency-svc-c2nk9 [236.397538ms] May 13 22:00:50.244: INFO: Created: latency-svc-wfcw2 May 13 22:00:50.287: INFO: Got endpoints: latency-svc-drm8k [279.682086ms] May 13 22:00:50.293: INFO: Created: latency-svc-gnxw4 May 13 22:00:50.336: INFO: Got endpoints: latency-svc-tpp8c [326.492641ms] May 13 22:00:50.342: INFO: Created: latency-svc-d8hln May 13 22:00:50.387: INFO: Got endpoints: latency-svc-skxlh [374.894922ms] May 13 22:00:50.392: INFO: Created: latency-svc-sz7sx May 13 22:00:50.437: INFO: Got endpoints: latency-svc-rsf2w [421.488084ms] May 13 22:00:50.442: INFO: Created: latency-svc-dlwdw May 13 22:00:50.487: INFO: Got endpoints: latency-svc-4gzv7 [468.361675ms] May 13 22:00:50.493: INFO: Created: latency-svc-ntx5w May 13 22:00:50.537: INFO: Got endpoints: latency-svc-brdrd [516.77299ms] May 13 22:00:50.543: INFO: Created: latency-svc-9bdrn May 13 22:00:50.586: INFO: Got endpoints: latency-svc-rj9k2 [561.305927ms] May 13 22:00:50.591: INFO: Created: latency-svc-2wjs6 May 13 22:00:50.637: INFO: Got endpoints: latency-svc-jrpld [609.484073ms] May 13 22:00:50.643: INFO: Created: latency-svc-tvhvv May 13 22:00:50.688: INFO: Got endpoints: latency-svc-whqmp [657.495851ms] May 13 22:00:50.695: INFO: Created: latency-svc-wchgm May 13 22:00:50.739: INFO: Got endpoints: latency-svc-ck55v [705.88607ms] May 13 22:00:50.746: INFO: Created: latency-svc-tcqq4 May 13 22:00:50.786: INFO: Got endpoints: latency-svc-m68kl [749.970102ms] May 13 22:00:50.792: INFO: Created: latency-svc-mzbv6 May 13 22:00:50.836: INFO: Got endpoints: latency-svc-85bhr [749.563936ms] May 13 22:00:50.841: INFO: Created: latency-svc-gz4fk May 13 22:00:50.886: INFO: Got endpoints: latency-svc-zl6bt [749.322529ms] May 13 22:00:50.891: INFO: Created: latency-svc-fcpvc May 13 22:00:50.936: INFO: Got endpoints: latency-svc-drgjd [749.737984ms] May 13 22:00:50.942: INFO: Created: latency-svc-zg5wb May 13 22:00:50.987: INFO: Got endpoints: latency-svc-wfcw2 [748.890389ms] May 13 22:00:50.993: INFO: Created: latency-svc-lkzz4 May 13 22:00:51.037: INFO: Got endpoints: latency-svc-gnxw4 [749.829665ms] May 13 22:00:51.042: INFO: Created: latency-svc-nfvrb May 13 22:00:51.086: INFO: Got endpoints: latency-svc-d8hln [749.484709ms] May 13 22:00:51.092: INFO: Created: latency-svc-2shg6 May 13 22:00:51.137: INFO: Got endpoints: latency-svc-sz7sx [749.731294ms] May 13 22:00:51.142: INFO: Created: latency-svc-vmn72 May 13 22:00:51.187: INFO: Got endpoints: latency-svc-dlwdw [749.920756ms] May 13 22:00:51.192: INFO: Created: latency-svc-mx5zg May 13 22:00:51.237: INFO: Got endpoints: latency-svc-ntx5w [749.868298ms] May 13 22:00:51.243: INFO: Created: latency-svc-wbbkn May 13 22:00:51.286: INFO: Got endpoints: latency-svc-9bdrn [749.174422ms] May 13 22:00:51.291: INFO: Created: latency-svc-8rqph May 13 22:00:51.338: INFO: Got endpoints: latency-svc-2wjs6 [752.192335ms] May 13 22:00:51.344: 
INFO: Created: latency-svc-ctn85 May 13 22:00:51.386: INFO: Got endpoints: latency-svc-tvhvv [749.568809ms] May 13 22:00:51.392: INFO: Created: latency-svc-pfxwh May 13 22:00:51.437: INFO: Got endpoints: latency-svc-wchgm [749.389162ms] May 13 22:00:51.443: INFO: Created: latency-svc-mqj29 May 13 22:00:51.487: INFO: Got endpoints: latency-svc-tcqq4 [747.75993ms] May 13 22:00:51.492: INFO: Created: latency-svc-qppfm May 13 22:00:51.537: INFO: Got endpoints: latency-svc-mzbv6 [750.830625ms] May 13 22:00:51.543: INFO: Created: latency-svc-v5fjg May 13 22:00:51.586: INFO: Got endpoints: latency-svc-gz4fk [749.926309ms] May 13 22:00:51.592: INFO: Created: latency-svc-xzdqv May 13 22:00:51.637: INFO: Got endpoints: latency-svc-fcpvc [751.060966ms] May 13 22:00:51.643: INFO: Created: latency-svc-2dngp May 13 22:00:51.688: INFO: Got endpoints: latency-svc-zg5wb [751.203017ms] May 13 22:00:51.694: INFO: Created: latency-svc-frps9 May 13 22:00:51.737: INFO: Got endpoints: latency-svc-lkzz4 [749.951239ms] May 13 22:00:51.742: INFO: Created: latency-svc-gnth9 May 13 22:00:51.786: INFO: Got endpoints: latency-svc-nfvrb [749.524019ms] May 13 22:00:51.792: INFO: Created: latency-svc-vmqsr May 13 22:00:51.837: INFO: Got endpoints: latency-svc-2shg6 [750.55678ms] May 13 22:00:51.844: INFO: Created: latency-svc-c8wfx May 13 22:00:51.887: INFO: Got endpoints: latency-svc-vmn72 [749.663578ms] May 13 22:00:51.892: INFO: Created: latency-svc-9x7w4 May 13 22:00:51.937: INFO: Got endpoints: latency-svc-mx5zg [750.789048ms] May 13 22:00:51.945: INFO: Created: latency-svc-92j8w May 13 22:00:51.986: INFO: Got endpoints: latency-svc-wbbkn [748.860877ms] May 13 22:00:51.996: INFO: Created: latency-svc-jws7n May 13 22:00:52.037: INFO: Got endpoints: latency-svc-8rqph [750.495154ms] May 13 22:00:52.042: INFO: Created: latency-svc-zz27x May 13 22:00:52.136: INFO: Got endpoints: latency-svc-ctn85 [797.566932ms] May 13 22:00:52.141: INFO: Created: latency-svc-xkslr May 13 22:00:52.187: INFO: Got endpoints: latency-svc-pfxwh [801.019963ms] May 13 22:00:52.193: INFO: Created: latency-svc-gxnbx May 13 22:00:52.236: INFO: Got endpoints: latency-svc-mqj29 [798.837644ms] May 13 22:00:52.241: INFO: Created: latency-svc-brv9j May 13 22:00:52.287: INFO: Got endpoints: latency-svc-qppfm [800.199235ms] May 13 22:00:52.293: INFO: Created: latency-svc-7kr6f May 13 22:00:52.337: INFO: Got endpoints: latency-svc-v5fjg [800.02249ms] May 13 22:00:52.343: INFO: Created: latency-svc-l2597 May 13 22:00:52.386: INFO: Got endpoints: latency-svc-xzdqv [799.727474ms] May 13 22:00:52.391: INFO: Created: latency-svc-rg6wr May 13 22:00:52.436: INFO: Got endpoints: latency-svc-2dngp [799.521944ms] May 13 22:00:52.442: INFO: Created: latency-svc-kv7hj May 13 22:00:52.488: INFO: Got endpoints: latency-svc-frps9 [800.196579ms] May 13 22:00:52.493: INFO: Created: latency-svc-7l87x May 13 22:00:52.537: INFO: Got endpoints: latency-svc-gnth9 [800.446189ms] May 13 22:00:52.542: INFO: Created: latency-svc-vn4tp May 13 22:00:52.586: INFO: Got endpoints: latency-svc-vmqsr [800.065939ms] May 13 22:00:52.592: INFO: Created: latency-svc-cg85s May 13 22:00:52.637: INFO: Got endpoints: latency-svc-c8wfx [799.954762ms] May 13 22:00:52.642: INFO: Created: latency-svc-fks6b May 13 22:00:52.687: INFO: Got endpoints: latency-svc-9x7w4 [800.268742ms] May 13 22:00:52.692: INFO: Created: latency-svc-b6qnk May 13 22:00:52.737: INFO: Got endpoints: latency-svc-92j8w [799.108792ms] May 13 22:00:52.742: INFO: Created: latency-svc-bldf4 May 13 22:00:52.786: INFO: Got endpoints: 
latency-svc-jws7n [800.109545ms] May 13 22:00:52.791: INFO: Created: latency-svc-6x6w4 May 13 22:00:52.837: INFO: Got endpoints: latency-svc-zz27x [799.577736ms] May 13 22:00:52.842: INFO: Created: latency-svc-x95wj May 13 22:00:52.887: INFO: Got endpoints: latency-svc-xkslr [751.225785ms] May 13 22:00:52.893: INFO: Created: latency-svc-wch65 May 13 22:00:52.938: INFO: Got endpoints: latency-svc-gxnbx [750.600292ms] May 13 22:00:52.943: INFO: Created: latency-svc-f7rk8 May 13 22:00:52.987: INFO: Got endpoints: latency-svc-brv9j [750.93077ms] May 13 22:00:52.992: INFO: Created: latency-svc-mlrr7 May 13 22:00:53.036: INFO: Got endpoints: latency-svc-7kr6f [749.42036ms] May 13 22:00:53.043: INFO: Created: latency-svc-c95nc May 13 22:00:53.137: INFO: Got endpoints: latency-svc-l2597 [800.038677ms] May 13 22:00:53.143: INFO: Created: latency-svc-9dsr7 May 13 22:00:53.187: INFO: Got endpoints: latency-svc-rg6wr [800.450582ms] May 13 22:00:53.193: INFO: Created: latency-svc-kxn8x May 13 22:00:53.238: INFO: Got endpoints: latency-svc-kv7hj [800.987365ms] May 13 22:00:53.246: INFO: Created: latency-svc-96jbr May 13 22:00:53.286: INFO: Got endpoints: latency-svc-7l87x [798.190527ms] May 13 22:00:53.292: INFO: Created: latency-svc-25lrd May 13 22:00:53.336: INFO: Got endpoints: latency-svc-vn4tp [799.151019ms] May 13 22:00:53.342: INFO: Created: latency-svc-nk7zr May 13 22:00:53.387: INFO: Got endpoints: latency-svc-cg85s [800.655108ms] May 13 22:00:53.392: INFO: Created: latency-svc-88mz5 May 13 22:00:53.438: INFO: Got endpoints: latency-svc-fks6b [800.900103ms] May 13 22:00:53.445: INFO: Created: latency-svc-78wx5 May 13 22:00:53.487: INFO: Got endpoints: latency-svc-b6qnk [799.305862ms] May 13 22:00:53.493: INFO: Created: latency-svc-rcpkd May 13 22:00:53.537: INFO: Got endpoints: latency-svc-bldf4 [800.103102ms] May 13 22:00:53.543: INFO: Created: latency-svc-hx2xp May 13 22:00:53.587: INFO: Got endpoints: latency-svc-6x6w4 [800.679449ms] May 13 22:00:53.595: INFO: Created: latency-svc-5sjk8 May 13 22:00:53.637: INFO: Got endpoints: latency-svc-x95wj [800.309916ms] May 13 22:00:53.643: INFO: Created: latency-svc-54cgx May 13 22:00:53.687: INFO: Got endpoints: latency-svc-wch65 [799.755472ms] May 13 22:00:53.693: INFO: Created: latency-svc-9x2hc May 13 22:00:53.736: INFO: Got endpoints: latency-svc-f7rk8 [798.409068ms] May 13 22:00:53.742: INFO: Created: latency-svc-q685w May 13 22:00:53.787: INFO: Got endpoints: latency-svc-mlrr7 [799.2677ms] May 13 22:00:53.793: INFO: Created: latency-svc-htwcl May 13 22:00:53.837: INFO: Got endpoints: latency-svc-c95nc [800.799238ms] May 13 22:00:53.843: INFO: Created: latency-svc-dm75j May 13 22:00:53.887: INFO: Got endpoints: latency-svc-9dsr7 [750.075614ms] May 13 22:00:53.892: INFO: Created: latency-svc-5cd7x May 13 22:00:53.938: INFO: Got endpoints: latency-svc-kxn8x [750.9604ms] May 13 22:00:53.943: INFO: Created: latency-svc-w2zfb May 13 22:00:53.988: INFO: Got endpoints: latency-svc-96jbr [750.21646ms] May 13 22:00:53.993: INFO: Created: latency-svc-d6f5v May 13 22:00:54.036: INFO: Got endpoints: latency-svc-25lrd [750.059922ms] May 13 22:00:54.042: INFO: Created: latency-svc-jrmxk May 13 22:00:54.087: INFO: Got endpoints: latency-svc-nk7zr [750.166699ms] May 13 22:00:54.093: INFO: Created: latency-svc-pjc9k May 13 22:00:54.138: INFO: Got endpoints: latency-svc-88mz5 [751.39627ms] May 13 22:00:54.144: INFO: Created: latency-svc-x6v9z May 13 22:00:54.187: INFO: Got endpoints: latency-svc-78wx5 [749.127679ms] May 13 22:00:54.192: INFO: Created: 
latency-svc-5xslx May 13 22:00:54.237: INFO: Got endpoints: latency-svc-rcpkd [750.372966ms] May 13 22:00:54.242: INFO: Created: latency-svc-hj6pj May 13 22:00:54.287: INFO: Got endpoints: latency-svc-hx2xp [750.369572ms] May 13 22:00:54.292: INFO: Created: latency-svc-vzqnn May 13 22:00:54.337: INFO: Got endpoints: latency-svc-5sjk8 [750.522888ms] May 13 22:00:54.343: INFO: Created: latency-svc-8h4kg May 13 22:00:54.388: INFO: Got endpoints: latency-svc-54cgx [751.004493ms] May 13 22:00:54.395: INFO: Created: latency-svc-x4v9s May 13 22:00:54.437: INFO: Got endpoints: latency-svc-9x2hc [749.647635ms] May 13 22:00:54.442: INFO: Created: latency-svc-9ldhv May 13 22:00:54.487: INFO: Got endpoints: latency-svc-q685w [750.618903ms] May 13 22:00:54.493: INFO: Created: latency-svc-9nzzv May 13 22:00:54.538: INFO: Got endpoints: latency-svc-htwcl [751.768191ms] May 13 22:00:54.544: INFO: Created: latency-svc-vvnwz May 13 22:00:54.586: INFO: Got endpoints: latency-svc-dm75j [748.826027ms] May 13 22:00:54.593: INFO: Created: latency-svc-fwb67 May 13 22:00:54.637: INFO: Got endpoints: latency-svc-5cd7x [750.248781ms] May 13 22:00:54.643: INFO: Created: latency-svc-4xstq May 13 22:00:54.687: INFO: Got endpoints: latency-svc-w2zfb [749.539024ms] May 13 22:00:54.693: INFO: Created: latency-svc-xq682 May 13 22:00:54.738: INFO: Got endpoints: latency-svc-d6f5v [749.839144ms] May 13 22:00:54.745: INFO: Created: latency-svc-k7nmp May 13 22:00:54.787: INFO: Got endpoints: latency-svc-jrmxk [750.522411ms] May 13 22:00:54.793: INFO: Created: latency-svc-4q57z May 13 22:00:54.837: INFO: Got endpoints: latency-svc-pjc9k [750.49828ms] May 13 22:00:54.843: INFO: Created: latency-svc-v8dhp May 13 22:00:54.887: INFO: Got endpoints: latency-svc-x6v9z [748.866038ms] May 13 22:00:54.893: INFO: Created: latency-svc-xwppz May 13 22:00:54.937: INFO: Got endpoints: latency-svc-5xslx [750.617113ms] May 13 22:00:54.943: INFO: Created: latency-svc-25v2t May 13 22:00:54.986: INFO: Got endpoints: latency-svc-hj6pj [749.18724ms] May 13 22:00:54.993: INFO: Created: latency-svc-67r4c May 13 22:00:55.036: INFO: Got endpoints: latency-svc-vzqnn [749.14776ms] May 13 22:00:55.042: INFO: Created: latency-svc-crvpx May 13 22:00:55.086: INFO: Got endpoints: latency-svc-8h4kg [748.929063ms] May 13 22:00:55.094: INFO: Created: latency-svc-gdhpk May 13 22:00:55.136: INFO: Got endpoints: latency-svc-x4v9s [748.348906ms] May 13 22:00:55.142: INFO: Created: latency-svc-hkkf4 May 13 22:00:55.187: INFO: Got endpoints: latency-svc-9ldhv [749.789951ms] May 13 22:00:55.192: INFO: Created: latency-svc-8vjlw May 13 22:00:55.236: INFO: Got endpoints: latency-svc-9nzzv [749.163451ms] May 13 22:00:55.241: INFO: Created: latency-svc-q9446 May 13 22:00:55.287: INFO: Got endpoints: latency-svc-vvnwz [748.169074ms] May 13 22:00:55.292: INFO: Created: latency-svc-lf6cf May 13 22:00:55.337: INFO: Got endpoints: latency-svc-fwb67 [751.140542ms] May 13 22:00:55.343: INFO: Created: latency-svc-dc6vd May 13 22:00:55.387: INFO: Got endpoints: latency-svc-4xstq [749.496765ms] May 13 22:00:55.393: INFO: Created: latency-svc-h6dqs May 13 22:00:55.436: INFO: Got endpoints: latency-svc-xq682 [748.555552ms] May 13 22:00:55.441: INFO: Created: latency-svc-n98tz May 13 22:00:55.486: INFO: Got endpoints: latency-svc-k7nmp [748.738243ms] May 13 22:00:55.492: INFO: Created: latency-svc-tx95w May 13 22:00:55.537: INFO: Got endpoints: latency-svc-4q57z [750.097243ms] May 13 22:00:55.542: INFO: Created: latency-svc-vwq4g May 13 22:00:55.586: INFO: Got endpoints: 
latency-svc-v8dhp [748.541514ms] May 13 22:00:55.591: INFO: Created: latency-svc-czqkg May 13 22:00:55.636: INFO: Got endpoints: latency-svc-xwppz [748.814109ms] May 13 22:00:55.642: INFO: Created: latency-svc-d2jhp May 13 22:00:55.688: INFO: Got endpoints: latency-svc-25v2t [750.173959ms] May 13 22:00:55.693: INFO: Created: latency-svc-kxxvk May 13 22:00:55.737: INFO: Got endpoints: latency-svc-67r4c [750.603146ms] May 13 22:00:55.743: INFO: Created: latency-svc-z4qcs May 13 22:00:55.786: INFO: Got endpoints: latency-svc-crvpx [749.964926ms] May 13 22:00:55.792: INFO: Created: latency-svc-lpdf2 May 13 22:00:55.836: INFO: Got endpoints: latency-svc-gdhpk [749.555047ms] May 13 22:00:55.841: INFO: Created: latency-svc-9r7rw May 13 22:00:55.887: INFO: Got endpoints: latency-svc-hkkf4 [750.67142ms] May 13 22:00:55.893: INFO: Created: latency-svc-xrhmj May 13 22:00:55.938: INFO: Got endpoints: latency-svc-8vjlw [750.973659ms] May 13 22:00:55.944: INFO: Created: latency-svc-xfmjh May 13 22:00:55.987: INFO: Got endpoints: latency-svc-q9446 [750.953678ms] May 13 22:00:55.993: INFO: Created: latency-svc-mzrkt May 13 22:00:56.036: INFO: Got endpoints: latency-svc-lf6cf [749.645648ms] May 13 22:00:56.042: INFO: Created: latency-svc-fbmjv May 13 22:00:56.086: INFO: Got endpoints: latency-svc-dc6vd [748.408974ms] May 13 22:00:56.091: INFO: Created: latency-svc-lg2gx May 13 22:00:56.137: INFO: Got endpoints: latency-svc-h6dqs [749.931256ms] May 13 22:00:56.143: INFO: Created: latency-svc-kpwfw May 13 22:00:56.187: INFO: Got endpoints: latency-svc-n98tz [751.081522ms] May 13 22:00:56.193: INFO: Created: latency-svc-26nzw May 13 22:00:56.239: INFO: Got endpoints: latency-svc-tx95w [752.429751ms] May 13 22:00:56.246: INFO: Created: latency-svc-bxjv4 May 13 22:00:56.287: INFO: Got endpoints: latency-svc-vwq4g [750.105537ms] May 13 22:00:56.293: INFO: Created: latency-svc-qqxt7 May 13 22:00:56.336: INFO: Got endpoints: latency-svc-czqkg [750.616684ms] May 13 22:00:56.341: INFO: Created: latency-svc-6ls58 May 13 22:00:56.387: INFO: Got endpoints: latency-svc-d2jhp [750.853535ms] May 13 22:00:56.392: INFO: Created: latency-svc-vgthx May 13 22:00:56.437: INFO: Got endpoints: latency-svc-kxxvk [749.642511ms] May 13 22:00:56.443: INFO: Created: latency-svc-jd7gz May 13 22:00:56.487: INFO: Got endpoints: latency-svc-z4qcs [750.471301ms] May 13 22:00:56.493: INFO: Created: latency-svc-q8jnq May 13 22:00:56.538: INFO: Got endpoints: latency-svc-lpdf2 [751.031557ms] May 13 22:00:56.543: INFO: Created: latency-svc-5jztw May 13 22:00:56.587: INFO: Got endpoints: latency-svc-9r7rw [750.743806ms] May 13 22:00:56.594: INFO: Created: latency-svc-54nfr May 13 22:00:56.637: INFO: Got endpoints: latency-svc-xrhmj [749.58771ms] May 13 22:00:56.642: INFO: Created: latency-svc-xzvwm May 13 22:00:56.687: INFO: Got endpoints: latency-svc-xfmjh [749.028061ms] May 13 22:00:56.692: INFO: Created: latency-svc-mzvwh May 13 22:00:56.737: INFO: Got endpoints: latency-svc-mzrkt [750.004177ms] May 13 22:00:56.745: INFO: Created: latency-svc-2jfm4 May 13 22:00:56.787: INFO: Got endpoints: latency-svc-fbmjv [750.62675ms] May 13 22:00:56.792: INFO: Created: latency-svc-k5jmc May 13 22:00:56.837: INFO: Got endpoints: latency-svc-lg2gx [750.80717ms] May 13 22:00:56.843: INFO: Created: latency-svc-5dj72 May 13 22:00:56.886: INFO: Got endpoints: latency-svc-kpwfw [749.455689ms] May 13 22:00:56.892: INFO: Created: latency-svc-5xdxz May 13 22:00:56.937: INFO: Got endpoints: latency-svc-26nzw [750.141979ms] May 13 22:00:56.943: INFO: Created: 
latency-svc-flz9x May 13 22:00:56.987: INFO: Got endpoints: latency-svc-bxjv4 [747.76374ms] May 13 22:00:56.994: INFO: Created: latency-svc-qd489 May 13 22:00:57.037: INFO: Got endpoints: latency-svc-qqxt7 [749.500761ms] May 13 22:00:57.042: INFO: Created: latency-svc-58hbg May 13 22:00:57.087: INFO: Got endpoints: latency-svc-6ls58 [750.835768ms] May 13 22:00:57.094: INFO: Created: latency-svc-qdjqp May 13 22:00:57.137: INFO: Got endpoints: latency-svc-vgthx [749.284305ms] May 13 22:00:57.144: INFO: Created: latency-svc-xbnnd May 13 22:00:57.187: INFO: Got endpoints: latency-svc-jd7gz [749.260538ms] May 13 22:00:57.192: INFO: Created: latency-svc-qm7tn May 13 22:00:57.238: INFO: Got endpoints: latency-svc-q8jnq [750.23466ms] May 13 22:00:57.246: INFO: Created: latency-svc-2hqx6 May 13 22:00:57.286: INFO: Got endpoints: latency-svc-5jztw [748.735523ms] May 13 22:00:57.292: INFO: Created: latency-svc-d44tg May 13 22:00:57.338: INFO: Got endpoints: latency-svc-54nfr [750.68128ms] May 13 22:00:57.344: INFO: Created: latency-svc-7bcw8 May 13 22:00:57.388: INFO: Got endpoints: latency-svc-xzvwm [751.029931ms] May 13 22:00:57.394: INFO: Created: latency-svc-8sxnk May 13 22:00:57.437: INFO: Got endpoints: latency-svc-mzvwh [750.563328ms] May 13 22:00:57.443: INFO: Created: latency-svc-7ddv8 May 13 22:00:57.488: INFO: Got endpoints: latency-svc-2jfm4 [751.041347ms] May 13 22:00:57.494: INFO: Created: latency-svc-prlm7 May 13 22:00:57.538: INFO: Got endpoints: latency-svc-k5jmc [751.536787ms] May 13 22:00:57.544: INFO: Created: latency-svc-k5n2l May 13 22:00:57.587: INFO: Got endpoints: latency-svc-5dj72 [750.046162ms] May 13 22:00:57.592: INFO: Created: latency-svc-zjhkv May 13 22:00:57.637: INFO: Got endpoints: latency-svc-5xdxz [750.670589ms] May 13 22:00:57.642: INFO: Created: latency-svc-69ff7 May 13 22:00:57.686: INFO: Got endpoints: latency-svc-flz9x [749.241804ms] May 13 22:00:57.691: INFO: Created: latency-svc-7xsj9 May 13 22:00:57.736: INFO: Got endpoints: latency-svc-qd489 [748.571822ms] May 13 22:00:57.741: INFO: Created: latency-svc-5mhrc May 13 22:00:57.786: INFO: Got endpoints: latency-svc-58hbg [749.617244ms] May 13 22:00:57.792: INFO: Created: latency-svc-c2ls6 May 13 22:00:57.837: INFO: Got endpoints: latency-svc-qdjqp [750.025573ms] May 13 22:00:57.842: INFO: Created: latency-svc-plmtf May 13 22:00:57.887: INFO: Got endpoints: latency-svc-xbnnd [750.395275ms] May 13 22:00:57.936: INFO: Got endpoints: latency-svc-qm7tn [749.208307ms] May 13 22:00:57.986: INFO: Got endpoints: latency-svc-2hqx6 [748.764714ms] May 13 22:00:58.037: INFO: Got endpoints: latency-svc-d44tg [750.520679ms] May 13 22:00:58.086: INFO: Got endpoints: latency-svc-7bcw8 [748.620247ms] May 13 22:00:58.136: INFO: Got endpoints: latency-svc-8sxnk [748.607589ms] May 13 22:00:58.186: INFO: Got endpoints: latency-svc-7ddv8 [748.644109ms] May 13 22:00:58.236: INFO: Got endpoints: latency-svc-prlm7 [747.135067ms] May 13 22:00:58.287: INFO: Got endpoints: latency-svc-k5n2l [748.214462ms] May 13 22:00:58.336: INFO: Got endpoints: latency-svc-zjhkv [748.913708ms] May 13 22:00:58.386: INFO: Got endpoints: latency-svc-69ff7 [748.950012ms] May 13 22:00:58.436: INFO: Got endpoints: latency-svc-7xsj9 [749.648752ms] May 13 22:00:58.486: INFO: Got endpoints: latency-svc-5mhrc [750.084977ms] May 13 22:00:58.537: INFO: Got endpoints: latency-svc-c2ls6 [750.916375ms] May 13 22:00:58.587: INFO: Got endpoints: latency-svc-plmtf [749.508008ms] May 13 22:00:58.587: INFO: Latencies: [9.881708ms 9.970813ms 14.321435ms 17.906422ms 
19.446348ms 22.414135ms 25.315529ms 28.362319ms 30.029378ms 34.195824ms 36.586672ms 39.430226ms 42.031995ms 42.550587ms 42.713537ms 43.772315ms 44.070873ms 45.08698ms 45.233361ms 45.593574ms 46.055672ms 46.246622ms 46.385197ms 46.442112ms 46.462315ms 46.645713ms 46.774416ms 47.03504ms 47.083798ms 47.504998ms 48.40946ms 95.331537ms 141.506714ms 185.666621ms 236.397538ms 279.682086ms 326.492641ms 374.894922ms 421.488084ms 468.361675ms 516.77299ms 561.305927ms 609.484073ms 657.495851ms 705.88607ms 747.135067ms 747.75993ms 747.76374ms 748.169074ms 748.214462ms 748.348906ms 748.408974ms 748.541514ms 748.555552ms 748.571822ms 748.607589ms 748.620247ms 748.644109ms 748.735523ms 748.738243ms 748.764714ms 748.814109ms 748.826027ms 748.860877ms 748.866038ms 748.890389ms 748.913708ms 748.929063ms 748.950012ms 749.028061ms 749.127679ms 749.14776ms 749.163451ms 749.174422ms 749.18724ms 749.208307ms 749.241804ms 749.260538ms 749.284305ms 749.322529ms 749.389162ms 749.42036ms 749.455689ms 749.484709ms 749.496765ms 749.500761ms 749.508008ms 749.524019ms 749.539024ms 749.555047ms 749.563936ms 749.568809ms 749.58771ms 749.617244ms 749.642511ms 749.645648ms 749.647635ms 749.648752ms 749.663578ms 749.731294ms 749.737984ms 749.789951ms 749.829665ms 749.839144ms 749.868298ms 749.920756ms 749.926309ms 749.931256ms 749.951239ms 749.964926ms 749.970102ms 750.004177ms 750.025573ms 750.046162ms 750.059922ms 750.075614ms 750.084977ms 750.097243ms 750.105537ms 750.141979ms 750.166699ms 750.173959ms 750.21646ms 750.23466ms 750.248781ms 750.369572ms 750.372966ms 750.395275ms 750.471301ms 750.495154ms 750.49828ms 750.520679ms 750.522411ms 750.522888ms 750.55678ms 750.563328ms 750.600292ms 750.603146ms 750.616684ms 750.617113ms 750.618903ms 750.62675ms 750.670589ms 750.67142ms 750.68128ms 750.743806ms 750.789048ms 750.80717ms 750.830625ms 750.835768ms 750.853535ms 750.916375ms 750.93077ms 750.953678ms 750.9604ms 750.973659ms 751.004493ms 751.029931ms 751.031557ms 751.041347ms 751.060966ms 751.081522ms 751.140542ms 751.203017ms 751.225785ms 751.39627ms 751.536787ms 751.768191ms 752.192335ms 752.429751ms 797.566932ms 798.190527ms 798.409068ms 798.837644ms 799.108792ms 799.151019ms 799.2677ms 799.305862ms 799.521944ms 799.577736ms 799.727474ms 799.755472ms 799.954762ms 800.02249ms 800.038677ms 800.065939ms 800.103102ms 800.109545ms 800.196579ms 800.199235ms 800.268742ms 800.309916ms 800.446189ms 800.450582ms 800.655108ms 800.679449ms 800.799238ms 800.900103ms 800.987365ms 801.019963ms] May 13 22:00:58.587: INFO: 50 %ile: 749.737984ms May 13 22:00:58.587: INFO: 90 %ile: 799.727474ms May 13 22:00:58.587: INFO: 99 %ile: 800.987365ms May 13 22:00:58.587: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:58.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3824" for this suite. 
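For context on the percentile summary above: the suite collected 200 endpoint-propagation latency samples and reports the 50th, 90th, and 99th percentiles over the sorted set. A minimal, self-contained Go sketch of that kind of computation (a hypothetical helper for illustration, not the e2e framework's actual source) could look like:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of a set of
// latency samples, using a simple "index into the sorted slice" estimate.
func percentile(samples []time.Duration, p float64) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(float64(len(sorted))*p/100.0) - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	// A handful of the nanosecond samples from the run above; the real
	// report is computed over all 200.
	samples := []time.Duration{
		9881708, 9970813, 14321435,
		749737984, 799727474, 800987365,
	}
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}
```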
• [SLOW TEST:12.874 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":17,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:52.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:00:52.699: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-b20c9a2b-557d-4b80-8d55-28be33b69e03" in namespace "security-context-test-5031" to be "Succeeded or Failed" May 13 22:00:52.701: INFO: Pod "alpine-nnp-false-b20c9a2b-557d-4b80-8d55-28be33b69e03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236666ms May 13 22:00:54.705: INFO: Pod "alpine-nnp-false-b20c9a2b-557d-4b80-8d55-28be33b69e03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006484272s May 13 22:00:56.709: INFO: Pod "alpine-nnp-false-b20c9a2b-557d-4b80-8d55-28be33b69e03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010645747s May 13 22:00:58.712: INFO: Pod "alpine-nnp-false-b20c9a2b-557d-4b80-8d55-28be33b69e03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01332496s May 13 22:00:58.712: INFO: Pod "alpine-nnp-false-b20c9a2b-557d-4b80-8d55-28be33b69e03" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:58.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5031" for this suite. 
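The AllowPrivilegeEscalation spec above creates a pod whose container sets allowPrivilegeEscalation=false and waits for it to reach "Succeeded or Failed". A rough client-go sketch of such a pod object (names, image tag, and command are illustrative, not the suite's actual fixtures):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// nnpFalsePod builds a pod whose container may not escalate privileges,
// mirroring the shape of the test's "alpine-nnp-false-..." pod.
func nnpFalsePod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "alpine-nnp-false-",
			Namespace:    namespace,
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "alpine:3.15", // illustrative image tag
				Command: []string{"id", "-u"},
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: boolPtr(false),
				},
			}},
		},
	}
}

func main() {
	p := nnpFalsePod("security-context-test")
	fmt.Printf("allowPrivilegeEscalation=%v\n",
		*p.Spec.Containers[0].SecurityContext.AllowPrivilegeEscalation)
}
```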
• [SLOW TEST:6.060 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":114,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:58.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-faa29aa1-d2b6-40aa-a9b8-0f600ed4b9cc [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:00:58.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3814" for this suite. 
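The empty-key spec above relies on apiserver-side validation: a Secret whose data map uses the empty string as a key must be rejected at create time, which is why the spec passes without ever running a pod. A hedged client-go sketch of the same check (kubeconfig path taken from the run above; error text is whatever the apiserver returns):

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via the suite's kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
		Data: map[string][]byte{
			"": []byte("value-1"), // empty key: the apiserver must reject this
		},
	}
	_, err = cs.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{})
	if err == nil {
		log.Fatal("expected a validation error for the empty secret key")
	}
	log.Printf("create rejected as expected: %v", err)
}
```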
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":9,"skipped":115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:45.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 13 22:00:46.282: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 13 22:00:48.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076046, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076046, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076046, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076046, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:00:50.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076046, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076046, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076046, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076046, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:00:53.303: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:00:53.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:01.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1543" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:15.690 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":21,"skipped":360,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:58.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:00:58.930: INFO: The status of Pod busybox-host-aliasesfc40c649-3a73-428b-914c-4d7bbf2d5850 is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:00.933: INFO: The status of Pod busybox-host-aliasesfc40c649-3a73-428b-914c-4d7bbf2d5850 is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:02.933: INFO: The status of Pod busybox-host-aliasesfc40c649-3a73-428b-914c-4d7bbf2d5850 is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:04.934: INFO: The status of Pod busybox-host-aliasesfc40c649-3a73-428b-914c-4d7bbf2d5850 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:04.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7832" for this suite. 
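The CustomResourceConversionWebhook spec above deploys a converter webhook and pairs it with the e2e-test-crd-conversion-webhook service, so that listing CRs across two versions forces conversion round-trips through the webhook. In apiextensions/v1 terms the wiring is a conversion stanza on the CRD spec; a hedged sketch of just that stanza (path, port, and CA bundle are illustrative):

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// conversionStanza shows the shape of a webhook-based CRD conversion
// config pointing at an in-cluster service, as the test deploys.
func conversionStanza(caBundle []byte) *apiextensionsv1.CustomResourceConversion {
	path := "/crdconvert" // illustrative path
	port := int32(9443)   // illustrative port
	return &apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook-1543",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
					Port:      &port,
				},
				CABundle: caBundle,
			},
			// ConversionReview versions the webhook understands.
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", conversionStanza(nil))
}
```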
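The Kubelet hostAliases spec above verifies that entries from pod.spec.hostAliases are merged by the kubelet into the container's /etc/hosts. The relevant field, sketched with illustrative IP and hostnames rather than the test's actual values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostAliasesPod builds a busybox pod carrying a hostAliases entry; the
// kubelet writes it into /etc/hosts inside the container.
func hostAliasesPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			HostAliases: []corev1.HostAlias{{
				IP:        "123.45.67.89",
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
}

func main() {
	fmt.Println(hostAliasesPod().Spec.HostAliases)
}
```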
• [SLOW TEST:6.059 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":167,"failed":0} SSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":57,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:35.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4398 STEP: creating service affinity-nodeport-transition in namespace services-4398 STEP: creating replication controller affinity-nodeport-transition in namespace services-4398 I0513 21:58:35.658779 23 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-4398, replica count: 3 I0513 21:58:38.709348 23 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 21:58:41.709881 23 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 21:58:44.710474 23 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 21:58:44.719: INFO: Creating new exec pod May 13 21:58:49.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' May 13 21:58:50.213: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-transition 80\n+ echo hostName\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" May 13 21:58:50.213: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 13 21:58:50.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.19.202 80' May 13 21:58:50.500: INFO: stderr: "+ nc -v -t -w 2 10.233.19.202 80\n+ echo hostName\nConnection to 
10.233.19.202 80 port [tcp/http] succeeded!\n" May 13 21:58:50.500: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 13 21:58:50.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:58:50.754: INFO: rc: 1 May 13 21:58:50.754: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 21:58:51.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:58:52.487: INFO: rc: 1 May 13 21:58:52.487: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 21:58:52.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:58:53.015: INFO: rc: 1 May 13 21:58:53.015: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 21:58:53.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:58:54.030: INFO: rc: 1 May 13 21:58:54.031: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 21:58:54.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:58:55.013: INFO: rc: 1 May 13 21:58:55.013: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 21:58:55.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:58:56.038: INFO: rc: 1 May 13 21:58:56.038: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 21:58:56.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:58:57.034: INFO: rc: 1 May 13 21:58:57.034: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 21:58:57.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:58:57.998: INFO: rc: 1 May 13 21:58:57.998: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 21:58:58.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:58:59.058: INFO: rc: 1 May 13 21:58:59.058: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 21:58:59.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:59:00.096: INFO: rc: 1 May 13 21:59:00.096: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 21:59:00.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:59:01.036: INFO: rc: 1 May 13 21:59:01.036: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 21:59:01.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 21:59:01.999: INFO: rc: 1 May 13 21:59:01.999: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 21:59:02.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322'
May 13 21:59:03.144: INFO: rc: 1
May 13 21:59:03.144: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31322
nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
[the identical probe was retried roughly once per second, from the run at 21:59:03.756 through the run at 22:00:42.754, with the final failure logged at May 13 22:00:44.176; every attempt returned rc: 1 with stderr "nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused", differing only in timestamps and occasional harmless interleaving of the two shell xtrace lines]
May 13 22:00:44.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 22:00:45.007: INFO: rc: 1 May 13 22:00:45.007: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:45.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 22:00:46.147: INFO: rc: 1 May 13 22:00:46.147: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:46.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 22:00:47.052: INFO: rc: 1 May 13 22:00:47.052: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:47.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 22:00:48.024: INFO: rc: 1 May 13 22:00:48.024: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
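The framework keeps re-running the same in-pod probe until a 2m0s deadline, then raises the "service is not reachable within 2m0s timeout" error shown just below. The following is a minimal standalone sketch of that loop; the roughly one-second cadence and the timeout are read off the log timestamps rather than copied from test/e2e/network/service.go, so the real implementation may differ in detail.

// poll.go: a sketch of the retry loop around the reachability probe seen in
// this log. Namespace, pod name, and endpoint are taken from the log lines
// above; the cadence is inferred from their timestamps.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probe runs the same in-pod nc check the framework execs above and reports
// whether the NodePort answered.
func probe() ([]byte, error) {
	return exec.Command("kubectl",
		"--kubeconfig=/root/.kube/config",
		"--namespace=services-4398",
		"exec", "execpod-affinitybf22h", "--",
		"/bin/sh", "-x", "-c",
		"echo hostName | nc -v -t -w 2 10.10.190.207 31322",
	).CombinedOutput()
}

func main() {
	const timeout = 2 * time.Minute
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if out, err := probe(); err == nil {
			// agnhost echoes the serving pod's name; the session-affinity
			// test compares this value across repeated hits.
			fmt.Printf("reached backend: %s", out)
			return
		}
		time.Sleep(time.Second) // the log shows roughly one attempt per second
	}
	fmt.Printf("service is not reachable within %v timeout on endpoint 10.10.190.207:31322 over TCP protocol\n", timeout)
}

On a healthy cluster the first or second attempt returns a pod name; in the run recorded here every attempt fails until the deadline, which is what produces the FAIL below.
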
May 13 22:00:48.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 22:00:48.992: INFO: rc: 1 May 13 22:00:48.993: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:49.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 22:00:50.012: INFO: rc: 1 May 13 22:00:50.013: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:50.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 22:00:51.011: INFO: rc: 1 May 13 22:00:51.011: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:00:51.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322' May 13 22:00:51.250: INFO: rc: 1 May 13 22:00:51.250: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4398 exec execpod-affinitybf22h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31322: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31322 nc: connect to 10.10.190.207 port 31322 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying...
May 13 22:00:51.251: FAIL: Unexpected error:
    <*errors.errorString | 0xc0047a0360>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31322 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31322 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0010e14a0, 0x77b33d8, 0xc004e9cdc0, 0xc005180780, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000183800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000183800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000183800, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
May 13 22:00:51.252: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-4398, will wait for the garbage collector to delete the pods
May 13 22:00:51.326: INFO: Deleting ReplicationController affinity-nodeport-transition took: 3.620311ms
May 13 22:00:51.427: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.748774ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-4398".
STEP: Found 27 events.
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:35 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-np7sm
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:35 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-rv2gq
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:35 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-swpvj
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:35 +0000 UTC - event for affinity-nodeport-transition-np7sm: {default-scheduler } Scheduled: Successfully assigned services-4398/affinity-nodeport-transition-np7sm to node1
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:35 +0000 UTC - event for affinity-nodeport-transition-rv2gq: {default-scheduler } Scheduled: Successfully assigned services-4398/affinity-nodeport-transition-rv2gq to node1
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:35 +0000 UTC - event for affinity-nodeport-transition-swpvj: {default-scheduler } Scheduled: Successfully assigned services-4398/affinity-nodeport-transition-swpvj to node2
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:38 +0000 UTC - event for affinity-nodeport-transition-np7sm: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:38 +0000 UTC - event for affinity-nodeport-transition-np7sm: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 284.424448ms
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:38 +0000 UTC - event for affinity-nodeport-transition-np7sm: {kubelet node1} Created: Created container affinity-nodeport-transition
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:38 +0000 UTC - event for affinity-nodeport-transition-rv2gq: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:38 +0000 UTC - event for affinity-nodeport-transition-rv2gq: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 623.186684ms
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:38 +0000 UTC - event for affinity-nodeport-transition-swpvj: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 399.008636ms
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:38 +0000 UTC - event for affinity-nodeport-transition-swpvj: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:39 +0000 UTC - event for affinity-nodeport-transition-np7sm: {kubelet node1} Started: Started container affinity-nodeport-transition
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:39 +0000 UTC - event for affinity-nodeport-transition-rv2gq: {kubelet node1} Created: Created container affinity-nodeport-transition
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:39 +0000 UTC - event for affinity-nodeport-transition-rv2gq: {kubelet node1} Started: Started container affinity-nodeport-transition
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:40 +0000 UTC - event for affinity-nodeport-transition-swpvj: {kubelet node2} Created: Created container affinity-nodeport-transition
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:40 +0000 UTC - event for affinity-nodeport-transition-swpvj: {kubelet node2} Started: Started container affinity-nodeport-transition
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:44 +0000 UTC - event for execpod-affinitybf22h: {default-scheduler } Scheduled: Successfully assigned services-4398/execpod-affinitybf22h to node1
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:46 +0000 UTC - event for execpod-affinitybf22h: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 371.970378ms
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:46 +0000 UTC - event for execpod-affinitybf22h: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:47 +0000 UTC - event for execpod-affinitybf22h: {kubelet node1} Created: Created container agnhost-container
May 13 22:01:02.543: INFO: At 2022-05-13 21:58:47 +0000 UTC - event for execpod-affinitybf22h: {kubelet node1} Started: Started container agnhost-container
May 13 22:01:02.543: INFO: At 2022-05-13 22:00:51 +0000 UTC - event for affinity-nodeport-transition-np7sm: {kubelet node1} Killing: Stopping container affinity-nodeport-transition
May 13 22:01:02.543: INFO: At 2022-05-13 22:00:51 +0000 UTC - event for affinity-nodeport-transition-rv2gq: {kubelet node1} Killing: Stopping container affinity-nodeport-transition
May 13 22:01:02.543: INFO: At 2022-05-13 22:00:51 +0000 UTC - event for affinity-nodeport-transition-swpvj: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
May 13 22:01:02.543: INFO: At 2022-05-13 22:00:51 +0000 UTC - event for execpod-affinitybf22h: {kubelet node1} Killing: Stopping container agnhost-container
May 13 22:01:02.545: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 22:01:02.545: INFO: 
May 13 22:01:02.550: INFO: Logging node info for node master1
May 13 22:01:02.552: INFO: Node Info: &Node{ObjectMeta:{master1 e893469e-45f9-457b-9379-276178f6209f 36583 0 2022-05-13 19:57:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null
flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:57:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-13 19:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-13 20:05:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-13 20:09:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:02 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:02 +0000 
UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:02 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:01:02 +0000 UTC,LastTransitionTime:2022-05-13 20:03:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5bc4f1fb629f4c3bb455995355cca59c,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:196d75bb-273f-44bf-9b96-1cfef0d34445,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:01:02.553: INFO: Logging kubelet events for node master1 May 13 22:01:02.555: INFO: Logging pods the kubelet thinks is on node master1 May 13 22:01:02.576: INFO: kube-apiserver-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.576: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:01:02.576: INFO: kube-controller-manager-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.576: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:01:02.576: INFO: kube-scheduler-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.576: INFO: Container kube-scheduler ready: true, restart count 0 May 13 22:01:02.577: INFO: kube-flannel-jw4mp started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:01:02.577: INFO: Init container install-cni ready: true, restart count 2 May 13 22:01:02.577: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:01:02.577: INFO: kube-multus-ds-amd64-ts4fz started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.577: INFO: Container kube-multus ready: true, restart count 1 May 13 22:01:02.577: INFO: container-registry-65d7c44b96-gqdgz started at 2022-05-13 20:05:09 +0000 UTC (0+2 container statuses recorded) May 13 22:01:02.577: INFO: Container docker-registry ready: true, restart count 0 May 13 22:01:02.577: INFO: Container nginx ready: true, restart count 0 May 13 22:01:02.577: INFO: kube-proxy-6q994 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.577: 
INFO: Container kube-proxy ready: true, restart count 2 May 13 22:01:02.577: INFO: node-feature-discovery-controller-cff799f9f-k2qmv started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.577: INFO: Container nfd-controller ready: true, restart count 0 May 13 22:01:02.577: INFO: node-exporter-2jxfg started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:01:02.577: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:01:02.577: INFO: Container node-exporter ready: true, restart count 0 May 13 22:01:02.664: INFO: Latency metrics for node master1 May 13 22:01:02.664: INFO: Logging node info for node master2 May 13 22:01:02.666: INFO: Node Info: &Node{ObjectMeta:{master2 6394fb00-7ac6-4b0d-af37-0e7baf892992 36576 0 2022-05-13 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:01 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:01 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:01 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:01:01 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0c26206724384f32848637ec210bf517,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:87b6bd6a-947f-4fda-a24f-503738da156e,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:01:02.667: INFO: Logging kubelet events for node master2 May 13 22:01:02.669: INFO: Logging pods the kubelet thinks is on node master2 May 13 22:01:02.676: INFO: kube-controller-manager-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.676: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:01:02.676: INFO: kube-scheduler-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.676: INFO: Container kube-scheduler ready: true, restart count 2 May 13 22:01:02.676: INFO: node-exporter-zmlpx started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:01:02.676: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:01:02.676: INFO: Container node-exporter ready: true, restart count 0 May 13 22:01:02.676: INFO: kube-apiserver-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.676: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:01:02.676: INFO: kube-proxy-jxbwz started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.676: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:01:02.676: INFO: kube-flannel-gndff started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:01:02.676: INFO: Init container install-cni ready: true, restart count 2 May 13 22:01:02.676: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:01:02.676: INFO: kube-multus-ds-amd64-w98wb started at 2022-05-13 20:00:33 +0000 UTC 
(0+1 container statuses recorded) May 13 22:01:02.676: INFO: Container kube-multus ready: true, restart count 1 May 13 22:01:02.676: INFO: coredns-8474476ff8-m6b8s started at 2022-05-13 20:01:00 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.676: INFO: Container coredns ready: true, restart count 1 May 13 22:01:02.760: INFO: Latency metrics for node master2 May 13 22:01:02.760: INFO: Logging node info for node master3 May 13 22:01:02.764: INFO: Node Info: &Node{ObjectMeta:{master3 11a40d0b-d9d1-449f-a587-cc897edbfd9b 36532 0 2022-05-13 19:58:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} 
{} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:00 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:00 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:00 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:01:00 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fba609db464f479c06da20414d1979,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:55d995b3-c2cc-4b60-96f4-5a990abd0c48,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:01:02.764: INFO: Logging kubelet events for node master3 May 13 22:01:02.766: INFO: Logging pods the kubelet thinks is on node master3 May 13 22:01:02.775: INFO: coredns-8474476ff8-x29nh started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.775: INFO: Container coredns ready: true, restart count 1 May 13 22:01:02.775: INFO: kube-apiserver-master3 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.775: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:01:02.775: INFO: kube-multus-ds-amd64-ffgk5 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.775: INFO: Container kube-multus ready: true, restart count 1 May 13 22:01:02.775: INFO: kube-proxy-6fl99 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.775: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:01:02.775: INFO: kube-flannel-p5mwf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:01:02.775: INFO: Init container install-cni ready: true, restart count 0 May 13 22:01:02.775: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:01:02.775: INFO: dns-autoscaler-7df78bfcfb-wfmpz started at 2022-05-13 20:01:02 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.775: INFO: Container autoscaler ready: true, restart count 1 May 13 22:01:02.775: INFO: node-exporter-qh76s started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:01:02.775: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:01:02.775: INFO: Container node-exporter ready: true, restart count 0 May 13 22:01:02.775: INFO: kube-controller-manager-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.775: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:01:02.775: INFO: kube-scheduler-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.775: INFO: Container kube-scheduler ready: true, restart count 2 
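After the failure, the suite dumps the test namespace's events and a full Node object for every node, as seen above and continuing below. The same diagnostics can be collected programmatically with client-go. The sketch below is a minimal approximation, not the framework's own dump code: it assumes client-go is available on the module path, reuses the kubeconfig path from this log, and prints only node conditions rather than the entire Node struct.

// diag.go: a sketch of the post-failure diagnostics using client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Namespace events: the programmatic equivalent of the
	// "Collecting events from namespace" step above.
	evs, err := cs.CoreV1().Events("services-4398").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d events.\n", len(evs.Items))
	for _, e := range evs.Items {
		fmt.Printf("At %s - event for %s: %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Reason, e.Message)
	}

	// Node conditions: the readable core of the full Node dumps in this log.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s: %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}
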
May 13 22:01:02.852: INFO: Latency metrics for node master3 May 13 22:01:02.852: INFO: Logging node info for node node1 May 13 22:01:02.855: INFO: Node Info: &Node{ObjectMeta:{node1 dca01e5e-a739-4ccc-b102-bfd163c4b832 36581 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:12:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:20 +0000 UTC,LastTransitionTime:2022-05-13 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:01 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:01 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:01:01 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:01:01 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f73ea6ef9607468c91208265a5b02a1b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ff172cf5-ca8f-45aa-ade2-6dea8be1d249,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003949300,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:01:02.857: INFO: Logging kubelet events for node node1 May 13 22:01:02.860: INFO: Logging pods the kubelet thinks is on node node1 May 13 22:01:02.876: INFO: node-feature-discovery-worker-l459c started at 
2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.876: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:01:02.876: INFO: node-exporter-42x8d started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:01:02.876: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:01:02.876: INFO: Container node-exporter ready: true, restart count 0 May 13 22:01:02.876: INFO: kube-multus-ds-amd64-dtt2x started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.876: INFO: Container kube-multus ready: true, restart count 1 May 13 22:01:02.876: INFO: kubernetes-dashboard-785dcbb76d-tcgth started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.876: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 22:01:02.876: INFO: cmk-tfblh started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded) May 13 22:01:02.876: INFO: Container nodereport ready: true, restart count 0 May 13 22:01:02.876: INFO: Container reconcile ready: true, restart count 0 May 13 22:01:02.876: INFO: kube-flannel-xfj7m started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:01:02.877: INFO: Init container install-cni ready: true, restart count 2 May 13 22:01:02.877: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:01:02.877: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.877: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:01:02.877: INFO: cmk-webhook-6c9d5f8578-59hj6 started at 2022-05-13 20:13:16 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.877: INFO: Container cmk-webhook ready: true, restart count 0 May 13 22:01:02.877: INFO: prometheus-k8s-0 started at 2022-05-13 20:14:32 +0000 UTC (0+4 container statuses recorded) May 13 22:01:02.877: INFO: Container config-reloader ready: true, restart count 0 May 13 22:01:02.877: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 22:01:02.877: INFO: Container grafana ready: true, restart count 0 May 13 22:01:02.877: INFO: Container prometheus ready: true, restart count 1 May 13 22:01:02.877: INFO: alpine-nnp-false-b20c9a2b-557d-4b80-8d55-28be33b69e03 started at 2022-05-13 22:00:52 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.877: INFO: Container alpine-nnp-false-b20c9a2b-557d-4b80-8d55-28be33b69e03 ready: false, restart count 0 May 13 22:01:02.877: INFO: forbid-27541321-6qs6l started at 2022-05-13 22:01:00 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.877: INFO: Container c ready: false, restart count 0 May 13 22:01:02.877: INFO: kube-proxy-rs2zg started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.877: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:01:02.877: INFO: cmk-init-discover-node1-m2p59 started at 2022-05-13 20:12:33 +0000 UTC (0+3 container statuses recorded) May 13 22:01:02.877: INFO: Container discover ready: false, restart count 0 May 13 22:01:02.877: INFO: Container init ready: false, restart count 0 May 13 22:01:02.877: INFO: Container install ready: false, restart count 0 May 13 22:01:02.877: INFO: collectd-p26j2 started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded) May 13 22:01:02.877: INFO: Container collectd ready: true, restart count 0 May 13 22:01:02.877: INFO: Container 
collectd-exporter ready: true, restart count 0 May 13 22:01:02.877: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:01:02.877: INFO: pod-configmaps-f3ccd94b-020a-4950-9b4b-f949dd43d13f started at 2022-05-13 22:00:00 +0000 UTC (0+3 container statuses recorded) May 13 22:01:02.877: INFO: Container createcm-volume-test ready: true, restart count 0 May 13 22:01:02.877: INFO: Container delcm-volume-test ready: true, restart count 0 May 13 22:01:02.877: INFO: Container updcm-volume-test ready: true, restart count 0 May 13 22:01:02.877: INFO: test-webserver-9e0d337d-5f26-42ce-a270-201e2d55dd29 started at 2022-05-13 21:59:52 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.877: INFO: Container test-webserver ready: false, restart count 0 May 13 22:01:02.877: INFO: nginx-proxy-node1 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.877: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:01:02.877: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:01:02.877: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 22:01:03.087: INFO: Latency metrics for node node1 May 13 22:01:03.087: INFO: Logging node info for node node2 May 13 22:01:03.091: INFO: Node Info: &Node{ObjectMeta:{node2 461ea6c2-df11-4be4-802e-29bddc0f2535 36059 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true 
feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:13:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:54 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:54 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:00:54 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:00:54 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b36a7c38429c4cc598bd0e6ca8278ad0,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:4fcc32fc-d037-4cf9-a62f-f372f6cc17cb,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:01:03.092: INFO: Logging kubelet events for node node2 May 13 22:01:03.095: INFO: Logging pods the kubelet thinks is on node node2 May 13 22:01:03.108: INFO: busybox-e33fcb59-7a0e-46aa-8f4a-abcb91fcfab7 started at 2022-05-13 21:58:09 +0000 UTC (0+1 container statuses recorded) May 13 22:01:03.108: INFO: Container busybox ready: true, restart count 0 May 13 22:01:03.108: INFO: busybox-host-aliasesfc40c649-3a73-428b-914c-4d7bbf2d5850 started at 2022-05-13 22:00:58 +0000 UTC (0+1 container statuses recorded) May 13 22:01:03.108: INFO: Container busybox-host-aliasesfc40c649-3a73-428b-914c-4d7bbf2d5850 ready: false, restart count 0 May 13 22:01:03.108: INFO: terminate-cmd-rpof0e44ba6c-fc3c-4a52-9a06-c2e60339abea started at 2022-05-13 22:00:59 +0000 UTC (0+1 container statuses recorded) May 13 22:01:03.108: INFO: Container terminate-cmd-rpof ready: false, restart count 0 May 13 22:01:03.108: INFO: nginx-proxy-node2 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded) May 13 22:01:03.108: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:01:03.108: INFO: kube-proxy-wkzbm started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:01:03.108: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:01:03.108: INFO: node-feature-discovery-worker-cxxqf started at 2022-05-13 
20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:01:03.108: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:01:03.108: INFO: cmk-init-discover-node2-hm7r7 started at 2022-05-13 20:12:52 +0000 UTC (0+3 container statuses recorded) May 13 22:01:03.108: INFO: Container discover ready: false, restart count 0 May 13 22:01:03.108: INFO: Container init ready: false, restart count 0 May 13 22:01:03.108: INFO: Container install ready: false, restart count 0 May 13 22:01:03.108: INFO: pod-init-46dbb36a-23d6-4ab0-a5fb-e3917dbdf403 started at 2022-05-13 22:01:01 +0000 UTC (2+1 container statuses recorded) May 13 22:01:03.108: INFO: Init container init1 ready: false, restart count 0 May 13 22:01:03.108: INFO: Init container init2 ready: false, restart count 0 May 13 22:01:03.108: INFO: Container run1 ready: false, restart count 0 May 13 22:01:03.108: INFO: kube-flannel-lv9xf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:01:03.108: INFO: Init container install-cni ready: true, restart count 2 May 13 22:01:03.108: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:01:03.108: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded) May 13 22:01:03.108: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:01:03.108: INFO: node-exporter-n5snd started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:01:03.108: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:01:03.108: INFO: Container node-exporter ready: true, restart count 0 May 13 22:01:03.108: INFO: collectd-9gqhr started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded) May 13 22:01:03.108: INFO: Container collectd ready: true, restart count 0 May 13 22:01:03.108: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:01:03.108: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:01:03.108: INFO: liveness-b5948c4c-6c87-4a5f-97b7-b5ebe4a4b993 started at 2022-05-13 21:59:04 +0000 UTC (0+1 container statuses recorded) May 13 22:01:03.108: INFO: Container agnhost-container ready: false, restart count 4 May 13 22:01:03.109: INFO: svc-latency-rc-jp846 started at 2022-05-13 22:00:45 +0000 UTC (0+1 container statuses recorded) May 13 22:01:03.109: INFO: Container svc-latency-rc ready: true, restart count 0 May 13 22:01:03.109: INFO: kube-multus-ds-amd64-l7nx2 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:01:03.109: INFO: Container kube-multus ready: true, restart count 1 May 13 22:01:03.109: INFO: cmk-qhbd6 started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded) May 13 22:01:03.109: INFO: Container nodereport ready: true, restart count 0 May 13 22:01:03.109: INFO: Container reconcile ready: true, restart count 0 May 13 22:01:03.109: INFO: prometheus-operator-585ccfb458-vrwnp started at 2022-05-13 20:14:11 +0000 UTC (0+2 container statuses recorded) May 13 22:01:03.109: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:01:03.109: INFO: Container prometheus-operator ready: true, restart count 0 May 13 22:01:03.109: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 started at 2022-05-13 20:17:23 +0000 UTC (0+1 container statuses recorded) May 13 22:01:03.109: INFO: Container tas-extender ready: true, restart count 0 May 13 22:01:03.109: INFO: e2e-test-httpd-pod started at 2022-05-13 22:00:53 +0000 UTC (0+1 
container statuses recorded) May 13 22:01:03.109: INFO: Container e2e-test-httpd-pod ready: false, restart count 1 May 13 22:01:05.943: INFO: Latency metrics for node node2 May 13 22:01:05.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4398" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [150.331 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:00:51.251: Unexpected error: <*errors.errorString | 0xc0047a0360>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31322 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31322 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":57,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:01.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 13 22:01:01.532: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:10.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7241" for this suite. 
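------------------------------
Note on the init-container test above: with restartPolicy Never, a failing init container is run once and not retried, the pod phase goes to Failed, and the app container is never started — which matches the recorded statuses for pod-init-46dbb36a-23d6-4ab0-a5fb-e3917dbdf403 (init1, init2, and run1 all not ready). A minimal client-go sketch of a pod with that shape follows; the images and commands are illustrative assumptions, not the suite's actual source.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// RestartPolicyNever: "init1" fails once, is not retried, the pod ends
	// up Failed, and "run1" never starts.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			InitContainers: []v1.Container{
				{Name: "init1", Image: "busybox:1.28", Command: []string{"/bin/false"}}, // assumed failing command
				{Name: "init2", Image: "busybox:1.28", Command: []string{"/bin/true"}},  // never reached
			},
			Containers: []v1.Container{
				{Name: "run1", Image: "busybox:1.28", Command: []string{"/bin/true"}},
			},
		},
	}
	fmt.Printf("init containers: %d, restartPolicy: %s\n",
		len(pod.Spec.InitContainers), pod.Spec.RestartPolicy)
}
------------------------------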
• [SLOW TEST:8.619 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":22,"skipped":373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:40.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:11.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1911" for this suite. 
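------------------------------
The terminate-cmd-rpa/-rpof/-rpn steps above read as one container per restart policy (Always, OnFailure, Never) that exits with a chosen status, after which the suite asserts on RestartCount, Phase, the Ready condition, and State. A hedged client-go sketch of that read-back, assuming the kubeconfig path used throughout this run; the namespace and pod name are copied from this run's log and are regenerated every run, so treat them as placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("container-runtime-1911").Get(context.TODO(),
		"terminate-cmd-rpof0e44ba6c-fc3c-4a52-9a06-c2e60339abea", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("phase:", pod.Status.Phase)
	for _, st := range pod.Status.ContainerStatuses {
		// These are the fields the STEP lines above assert on.
		fmt.Printf("container=%s restarts=%d ready=%v state=%+v\n",
			st.Name, st.RestartCount, st.Ready, st.State)
	}
}
------------------------------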
• [SLOW TEST:31.227 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":140,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:53.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 13 22:00:53.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8118 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' May 13 22:00:53.374: INFO: stderr: "" May 13 22:00:53.374: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 13 22:00:58.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8118 get pod e2e-test-httpd-pod -o json' May 13 22:00:58.615: INFO: stderr: "" May 13 22:00:58.615: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.18\\\"\\n ],\\n \\\"mac\\\": \\\"ee:88:f8:86:30:d3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.18\\\"\\n ],\\n \\\"mac\\\": \\\"ee:88:f8:86:30:d3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2022-05-13T22:00:53Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8118\",\n \"resourceVersion\": \"36224\",\n \"uid\": \"84ab3d2b-7362-4f0f-97e9-5b3643abb07d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n 
\"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-xv9hj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-xv9hj\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-13T22:00:53Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-13T22:00:55Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-13T22:00:55Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-13T22:00:53Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://d7a3c8609d615ed6bbd4146141e7ae03add4637a5108e75b56d1c0fbbf813ac6\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-05-13T22:00:55Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.208\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.4.18\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.4.18\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-05-13T22:00:53Z\"\n }\n}\n" STEP: replace the image in the pod May 13 22:00:58.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8118 replace -f -' May 13 22:00:58.997: INFO: stderr: "" May 13 22:00:58.997: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 May 13 22:00:59.000: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=kubectl-8118 delete pods e2e-test-httpd-pod' May 13 22:01:12.670: INFO: stderr: "" May 13 22:01:12.670: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:12.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8118" for this suite. • [SLOW TEST:19.477 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":4,"skipped":47,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:04.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:01:05.274: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:01:07.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076065, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076065, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076065, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076065, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:01:10.295: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 13 22:01:11.295: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 13 22:01:12.295: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 13 
22:01:13.295: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:13.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-922" for this suite. STEP: Destroying namespace "webhook-922-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.380 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":11,"skipped":179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:06.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 13 22:01:06.384: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:01:06.399: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:01:08.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076066, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076066, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076066, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076066, 
loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:01:11.421: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 13 22:01:12.422: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 13 22:01:13.422: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:14.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5056" for this suite. STEP: Destroying namespace "webhook-5056-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.563 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":3,"skipped":90,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:14.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:01:14.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8660 version' May 13 22:01:14.755: INFO: stderr: "" May 13 22:01:14.755: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.9\", GitCommit:\"b631974d68ac5045e076c86a5c66fba6f128dc72\", GitTreeState:\"clean\", BuildDate:\"2022-01-19T17:51:12Z\", GoVersion:\"go1.16.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", 
BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:14.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8660" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":4,"skipped":100,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:13.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:19.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9386" for this suite. 
• [SLOW TEST:6.056 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":12,"skipped":204,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:12.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium May 13 22:01:12.109: INFO: Waiting up to 5m0s for pod "pod-859f5de5-5bb4-4b91-8723-3046578dfc3b" in namespace "emptydir-2626" to be "Succeeded or Failed" May 13 22:01:12.112: INFO: Pod "pod-859f5de5-5bb4-4b91-8723-3046578dfc3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.765883ms May 13 22:01:14.116: INFO: Pod "pod-859f5de5-5bb4-4b91-8723-3046578dfc3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00697447s May 13 22:01:16.120: INFO: Pod "pod-859f5de5-5bb4-4b91-8723-3046578dfc3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01121218s May 13 22:01:18.124: INFO: Pod "pod-859f5de5-5bb4-4b91-8723-3046578dfc3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01484368s May 13 22:01:20.127: INFO: Pod "pod-859f5de5-5bb4-4b91-8723-3046578dfc3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018547945s STEP: Saw pod success May 13 22:01:20.128: INFO: Pod "pod-859f5de5-5bb4-4b91-8723-3046578dfc3b" satisfied condition "Succeeded or Failed" May 13 22:01:20.130: INFO: Trying to get logs from node node2 pod pod-859f5de5-5bb4-4b91-8723-3046578dfc3b container test-container: STEP: delete the pod May 13 22:01:20.141: INFO: Waiting for pod pod-859f5de5-5bb4-4b91-8723-3046578dfc3b to disappear May 13 22:01:20.143: INFO: Pod pod-859f5de5-5bb4-4b91-8723-3046578dfc3b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:20.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2626" for this suite. 
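Note: the (non-root,0644,default) tuple in the EmptyDir test name means the pod runs as a non-root user, writes a file with 0644 permissions, and uses the default (node-disk-backed) emptyDir medium. A minimal sketch of such a pod; the name, UID, and image are assumptions (the real test uses its own mounttest image):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo     # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # assumed non-root UID
  containers:
  - name: test-container
    image: busybox             # assumed image
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # no medium set, i.e. the default disk-backed medium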
• [SLOW TEST:8.100 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":166,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:12.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 13 22:01:13.227: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:01:13.237: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:01:15.245: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:01:17.249: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 
22:01:19.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076073, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:01:22.255: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:22.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1977" for this suite. STEP: Destroying namespace "webhook-1977-markers" for this suite. 
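Note: the patching/updating test that follows registers a MutatingWebhookConfiguration, updates it so its rules no longer include CREATE (so a freshly created ConfigMap is not mutated), then patches CREATE back in and verifies mutation resumes. The object being toggled looks roughly like this; the webhook name, handler path, and caBundle are placeholders, while the service name and namespace match the log above:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-mutating-webhook            # illustrative name
webhooks:
- name: mutate-configmaps.example.com    # illustrative name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook             # the service paired with the endpoint above
      namespace: webhook-1977            # namespace from this run
      path: /mutating-configmaps         # assumed handler path
    caBundle: <base64-encoded CA>        # elided
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]               # the operation the test removes and re-adds
    resources: ["configmaps"]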
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.627 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":5,"skipped":59,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:14.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-6649bb70-bdd8-42e1-9ad5-f5ff86139326 STEP: Creating secret with name secret-projected-all-test-volume-45356a1a-e00e-4666-95d4-1a7398ad60f1 STEP: Creating a pod to test Check all projections for projected volume plugin May 13 22:01:14.812: INFO: Waiting up to 5m0s for pod "projected-volume-c5bec22e-aeab-4573-b2b1-286d7c3a5890" in namespace "projected-1325" to be "Succeeded or Failed" May 13 22:01:14.815: INFO: Pod "projected-volume-c5bec22e-aeab-4573-b2b1-286d7c3a5890": Phase="Pending", Reason="", readiness=false. Elapsed: 2.78032ms May 13 22:01:16.819: INFO: Pod "projected-volume-c5bec22e-aeab-4573-b2b1-286d7c3a5890": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006305266s May 13 22:01:18.824: INFO: Pod "projected-volume-c5bec22e-aeab-4573-b2b1-286d7c3a5890": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011325224s May 13 22:01:20.829: INFO: Pod "projected-volume-c5bec22e-aeab-4573-b2b1-286d7c3a5890": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016129213s May 13 22:01:22.833: INFO: Pod "projected-volume-c5bec22e-aeab-4573-b2b1-286d7c3a5890": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.020809448s STEP: Saw pod success May 13 22:01:22.833: INFO: Pod "projected-volume-c5bec22e-aeab-4573-b2b1-286d7c3a5890" satisfied condition "Succeeded or Failed" May 13 22:01:22.836: INFO: Trying to get logs from node node2 pod projected-volume-c5bec22e-aeab-4573-b2b1-286d7c3a5890 container projected-all-volume-test: STEP: delete the pod May 13 22:01:22.903: INFO: Waiting for pod projected-volume-c5bec22e-aeab-4573-b2b1-286d7c3a5890 to disappear May 13 22:01:22.905: INFO: Pod projected-volume-c5bec22e-aeab-4573-b2b1-286d7c3a5890 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:22.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1325" for this suite. • [SLOW TEST:8.137 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":105,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:58.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD May 13 22:00:58.720: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:24.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5649" for this suite. 
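Note: the CustomResourcePublishOpenAPI test above sets up a CRD with two served versions, renames one, and checks that the published OpenAPI spec follows along: the new version name is served, the old one disappears, and the untouched version is unchanged. A sketch of such a multi-version CRD, with an illustrative group and kind:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com       # illustrative
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v2                   # renamed (e.g. from v1) in the test's second step
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v4                   # the "other version", left unchanged
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object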
• [SLOW TEST:25.351 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":18,"skipped":307,"failed":0} [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:24.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates May 13 22:01:24.076: INFO: created test-podtemplate-1 May 13 22:01:24.079: INFO: created test-podtemplate-2 May 13 22:01:24.082: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates May 13 22:01:24.085: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity May 13 22:01:24.094: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:24.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-919" for this suite. 
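Note: the PodTemplates test above creates three labelled templates, lists them by label, and removes them with a single DeleteCollection call. A matching PodTemplate object looks like this; the label key/value and the pause image are assumptions, while the name matches the log:

apiVersion: v1
kind: PodTemplate
metadata:
  name: test-podtemplate-1
  labels:
    podtemplate-set: "true"    # assumed label used for the list/deleteCollection
template:
  metadata:
    labels:
      type: demo               # illustrative
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.4.1   # assumed image

The CLI equivalent of the DeleteCollection step would be kubectl delete podtemplates -l podtemplate-set=true.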
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":19,"skipped":307,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:24.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions May 13 22:01:24.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8169 api-versions' May 13 22:01:24.323: INFO: stderr: "" May 13 22:01:24.323: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:24.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8169" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":20,"skipped":352,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:19.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:01:19.830: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:01:21.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076079, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076079, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076079, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076079, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:01:23.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076079, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076079, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076079, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076079, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:01:26.852: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should 
be denied by the webhook May 13 22:01:26.866: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:26.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9501" for this suite. STEP: Destroying namespace "webhook-9501-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.397 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":13,"skipped":221,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:26.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:26.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1471" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":-1,"completed":14,"skipped":227,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:59:04.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-b5948c4c-6c87-4a5f-97b7-b5ebe4a4b993 in namespace container-probe-7453 May 13 21:59:08.885: INFO: Started pod liveness-b5948c4c-6c87-4a5f-97b7-b5ebe4a4b993 in namespace container-probe-7453 STEP: checking the pod's current state and verifying that restartCount is present May 13 21:59:08.888: INFO: Initial restart count of pod liveness-b5948c4c-6c87-4a5f-97b7-b5ebe4a4b993 is 0 May 13 21:59:26.927: INFO: Restart count of pod container-probe-7453/liveness-b5948c4c-6c87-4a5f-97b7-b5ebe4a4b993 is now 1 (18.039296089s elapsed) May 13 21:59:46.966: INFO: Restart count of pod container-probe-7453/liveness-b5948c4c-6c87-4a5f-97b7-b5ebe4a4b993 is now 2 (38.0779383s elapsed) May 13 22:00:07.004: INFO: Restart count of pod container-probe-7453/liveness-b5948c4c-6c87-4a5f-97b7-b5ebe4a4b993 is now 3 (58.115926398s elapsed) May 13 22:00:27.040: INFO: Restart count of pod container-probe-7453/liveness-b5948c4c-6c87-4a5f-97b7-b5ebe4a4b993 is now 4 (1m18.15209314s elapsed) May 13 22:01:27.150: INFO: Restart count of pod container-probe-7453/liveness-b5948c4c-6c87-4a5f-97b7-b5ebe4a4b993 is now 5 (2m18.261657252s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:27.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7453" for this suite. 
• [SLOW TEST:142.323 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":162,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:22.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command May 13 22:01:22.993: INFO: Waiting up to 5m0s for pod "client-containers-ec4c8f8b-576e-449f-b0e9-2a4b64368ac1" in namespace "containers-895" to be "Succeeded or Failed" May 13 22:01:22.995: INFO: Pod "client-containers-ec4c8f8b-576e-449f-b0e9-2a4b64368ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.893116ms May 13 22:01:24.998: INFO: Pod "client-containers-ec4c8f8b-576e-449f-b0e9-2a4b64368ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005015675s May 13 22:01:27.002: INFO: Pod "client-containers-ec4c8f8b-576e-449f-b0e9-2a4b64368ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009025456s May 13 22:01:29.007: INFO: Pod "client-containers-ec4c8f8b-576e-449f-b0e9-2a4b64368ac1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01405425s STEP: Saw pod success May 13 22:01:29.008: INFO: Pod "client-containers-ec4c8f8b-576e-449f-b0e9-2a4b64368ac1" satisfied condition "Succeeded or Failed" May 13 22:01:29.010: INFO: Trying to get logs from node node2 pod client-containers-ec4c8f8b-576e-449f-b0e9-2a4b64368ac1 container agnhost-container: STEP: delete the pod May 13 22:01:29.044: INFO: Waiting for pod client-containers-ec4c8f8b-576e-449f-b0e9-2a4b64368ac1 to disappear May 13 22:01:29.046: INFO: Pod client-containers-ec4c8f8b-576e-449f-b0e9-2a4b64368ac1 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:29.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-895" for this suite. 
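Note: in pod terms, "override the image's default command (docker entrypoint)" means setting spec.containers[].command, which replaces the image ENTRYPOINT (while args replaces CMD). A minimal sketch; the name, image, and echoed strings are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: busybox             # assumed image
    command: ["echo"]          # replaces the image ENTRYPOINT
    args: ["entrypoint", "overridden"]   # replaces the image CMD

The test asserts the container's output reflects the overridden command rather than the image default.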
• [SLOW TEST:6.093 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":130,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:24.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 13 22:01:24.385: INFO: The status of Pod labelsupdate3280beef-0381-4565-a988-ed2422210d4d is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:26.389: INFO: The status of Pod labelsupdate3280beef-0381-4565-a988-ed2422210d4d is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:28.389: INFO: The status of Pod labelsupdate3280beef-0381-4565-a988-ed2422210d4d is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:30.391: INFO: The status of Pod labelsupdate3280beef-0381-4565-a988-ed2422210d4d is Running (Ready = true) May 13 22:01:30.978: INFO: Successfully updated pod "labelsupdate3280beef-0381-4565-a988-ed2422210d4d" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:32.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6682" for this suite. 
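Note: the "update labels on modification" test above mounts the pod's own labels through a projected downward API volume, changes a label, and waits for the kubelet to rewrite the mounted file. A sketch of the volume wiring; the names, image, and label are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo      # illustrative name
  labels:
    key: value1                # the test later updates this label
spec:
  containers:
  - name: client-container
    image: busybox             # assumed image
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels   # the kubelet refreshes this file when labels change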
• [SLOW TEST:8.662 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:27.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:33.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-3348" for this suite. • [SLOW TEST:6.076 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":15,"skipped":244,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:27.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-d033d7d8-5bf5-4f1f-af7b-9af9b71040ef STEP: Creating a pod to test consume secrets May 13 22:01:27.206: INFO: Waiting up to 5m0s for pod "pod-secrets-c347d212-c892-4f53-8ae0-f5974d7f71bc" in namespace "secrets-7587" to be "Succeeded or Failed" May 13 22:01:27.208: INFO: Pod "pod-secrets-c347d212-c892-4f53-8ae0-f5974d7f71bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428313ms May 13 22:01:29.211: INFO: Pod "pod-secrets-c347d212-c892-4f53-8ae0-f5974d7f71bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005215167s May 13 22:01:31.216: INFO: Pod "pod-secrets-c347d212-c892-4f53-8ae0-f5974d7f71bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.009967271s May 13 22:01:33.219: INFO: Pod "pod-secrets-c347d212-c892-4f53-8ae0-f5974d7f71bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012837359s STEP: Saw pod success May 13 22:01:33.219: INFO: Pod "pod-secrets-c347d212-c892-4f53-8ae0-f5974d7f71bc" satisfied condition "Succeeded or Failed" May 13 22:01:33.221: INFO: Trying to get logs from node node2 pod pod-secrets-c347d212-c892-4f53-8ae0-f5974d7f71bc container secret-volume-test: STEP: delete the pod May 13 22:01:33.233: INFO: Waiting for pod pod-secrets-c347d212-c892-4f53-8ae0-f5974d7f71bc to disappear May 13 22:01:33.235: INFO: Pod pod-secrets-c347d212-c892-4f53-8ae0-f5974d7f71bc no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:33.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7587" for this suite. • [SLOW TEST:6.071 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":164,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:00.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-cfeb8779-580c-44b7-8ccd-25a3fc87a6cb STEP: Creating configMap with name cm-test-opt-upd-f1ea5b5a-87df-4ae8-9775-ade3d3658816 STEP: Creating the pod May 13 22:00:00.609: INFO: The status of Pod pod-configmaps-f3ccd94b-020a-4950-9b4b-f949dd43d13f is Pending, waiting for it to be Running (with Ready = true) May 13 22:00:02.613: INFO: The status of Pod pod-configmaps-f3ccd94b-020a-4950-9b4b-f949dd43d13f is Pending, waiting for it to be Running (with Ready = true) May 13 22:00:04.612: INFO: The status of Pod pod-configmaps-f3ccd94b-020a-4950-9b4b-f949dd43d13f is Pending, waiting for it to be Running (with Ready = true) May 13 22:00:06.614: INFO: The status of Pod pod-configmaps-f3ccd94b-020a-4950-9b4b-f949dd43d13f is Pending, waiting for it to be Running (with Ready = true) May 13 22:00:08.613: INFO: The status of Pod pod-configmaps-f3ccd94b-020a-4950-9b4b-f949dd43d13f is Pending, waiting for it to be Running (with Ready = true) May 13 22:00:10.613: INFO: The status of Pod pod-configmaps-f3ccd94b-020a-4950-9b4b-f949dd43d13f is Pending, waiting for it to be Running (with Ready = true) May 13 22:00:12.613: INFO: The status of Pod pod-configmaps-f3ccd94b-020a-4950-9b4b-f949dd43d13f is Running (Ready = true) STEP: Deleting configmap 
cm-test-opt-del-cfeb8779-580c-44b7-8ccd-25a3fc87a6cb STEP: Updating configmap cm-test-opt-upd-f1ea5b5a-87df-4ae8-9775-ade3d3658816 STEP: Creating configMap with name cm-test-opt-create-cfe4ad73-b645-47b8-a9d9-e0c404c71ac1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:39.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9394" for this suite. • [SLOW TEST:98.717 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:33.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 13 22:01:33.131: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:45.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5789" for this suite. 
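Note: "invoke init containers on a RestartAlways pod" exercises the ordering guarantee: with restartPolicy Always, each init container must run to completion, in order, before any app container starts. A minimal sketch; the names and image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo              # illustrative name
spec:
  restartPolicy: Always
  initContainers:              # run sequentially to completion first
  - name: init-1
    image: busybox             # assumed image
    command: ["/bin/true"]
  - name: init-2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run-forever
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]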
• [SLOW TEST:12.323 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":16,"skipped":253,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":355,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:33.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating all guestbook components May 13 22:01:33.032: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
May 13 22:01:33.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 create -f -' May 13 22:01:33.422: INFO: stderr: "" May 13 22:01:33.422: INFO: stdout: "service/agnhost-replica created\n" May 13 22:01:33.422: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
May 13 22:01:33.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 create -f -' May 13 22:01:33.759: INFO: stderr: "" May 13 22:01:33.759: INFO: stdout: "service/agnhost-primary created\n" May 13 22:01:33.759: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 13 22:01:33.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 create -f -' May 13 22:01:34.135: INFO: stderr: "" May 13 22:01:34.135: INFO: stdout: "service/frontend created\n" May 13 22:01:34.135: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 13 22:01:34.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 create -f -' May 13 22:01:34.505: INFO: stderr: "" May 13 22:01:34.505: INFO: stdout: "deployment.apps/frontend created\n" May 13 22:01:34.505: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 13 22:01:34.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 create -f -' May 13 22:01:34.853: INFO: stderr: "" May 13 22:01:34.853: INFO: stdout: "deployment.apps/agnhost-primary created\n" May 13 22:01:34.854: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 13 22:01:34.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 create -f -' May 13 22:01:35.232: INFO: stderr: "" May 13 22:01:35.232: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app May 13 22:01:35.232: INFO: Waiting for all frontend pods to be Running. May 13 22:01:45.286: INFO: Waiting for frontend to serve content. May 13 22:01:45.294: INFO: Trying to add a new entry to the guestbook. May 13 22:01:45.303: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 13 22:01:45.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 delete --grace-period=0 --force -f -' May 13 22:01:45.460: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 22:01:45.461: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources May 13 22:01:45.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 delete --grace-period=0 --force -f -' May 13 22:01:45.607: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 13 22:01:45.607: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 13 22:01:45.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 delete --grace-period=0 --force -f -' May 13 22:01:45.759: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 22:01:45.759: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 13 22:01:45.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 delete --grace-period=0 --force -f -' May 13 22:01:45.885: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 22:01:45.885: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 13 22:01:45.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 delete --grace-period=0 --force -f -' May 13 22:01:46.032: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 22:01:46.032: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 13 22:01:46.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2038 delete --grace-period=0 --force -f -' May 13 22:01:46.160: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 22:01:46.160: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:46.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2038" for this suite. 
• [SLOW TEST:13.164 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":22,"skipped":355,"failed":0} S ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:46.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:01:46.194: INFO: Creating pod... May 13 22:01:46.208: INFO: Pod Quantity: 1 Status: Pending May 13 22:01:47.212: INFO: Pod Quantity: 1 Status: Pending May 13 22:01:48.213: INFO: Pod Quantity: 1 Status: Pending May 13 22:01:49.212: INFO: Pod Status: Running May 13 22:01:49.212: INFO: Creating service... May 13 22:01:49.219: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/pods/agnhost/proxy/some/path/with/DELETE May 13 22:01:49.222: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE May 13 22:01:49.222: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/pods/agnhost/proxy/some/path/with/GET May 13 22:01:49.225: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET May 13 22:01:49.225: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/pods/agnhost/proxy/some/path/with/HEAD May 13 22:01:49.227: INFO: http.Client request:HEAD | StatusCode:200 May 13 22:01:49.227: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/pods/agnhost/proxy/some/path/with/OPTIONS May 13 22:01:49.229: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS May 13 22:01:49.229: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/pods/agnhost/proxy/some/path/with/PATCH May 13 22:01:49.232: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH May 13 22:01:49.232: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/pods/agnhost/proxy/some/path/with/POST May 13 22:01:49.235: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST May 13 22:01:49.235: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/pods/agnhost/proxy/some/path/with/PUT May 13 22:01:49.236: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT May 13 22:01:49.237: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/services/test-service/proxy/some/path/with/DELETE May 13 22:01:49.240: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE May 
13 22:01:49.240: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/services/test-service/proxy/some/path/with/GET May 13 22:01:49.243: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET May 13 22:01:49.243: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/services/test-service/proxy/some/path/with/HEAD May 13 22:01:49.245: INFO: http.Client request:HEAD | StatusCode:200 May 13 22:01:49.245: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/services/test-service/proxy/some/path/with/OPTIONS May 13 22:01:49.248: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS May 13 22:01:49.248: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/services/test-service/proxy/some/path/with/PATCH May 13 22:01:49.251: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH May 13 22:01:49.251: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/services/test-service/proxy/some/path/with/POST May 13 22:01:49.254: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST May 13 22:01:49.254: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-9654/services/test-service/proxy/some/path/with/PUT May 13 22:01:49.257: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:49.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9654" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":23,"skipped":356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:45.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium May 13 22:01:45.502: INFO: Waiting up to 5m0s for pod "pod-80912e24-cd31-449b-bdfc-3c82de232256" in namespace "emptydir-2884" to be "Succeeded or Failed" May 13 22:01:45.509: INFO: Pod "pod-80912e24-cd31-449b-bdfc-3c82de232256": Phase="Pending", Reason="", readiness=false. Elapsed: 6.570783ms May 13 22:01:47.511: INFO: Pod "pod-80912e24-cd31-449b-bdfc-3c82de232256": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008937s May 13 22:01:49.514: INFO: Pod "pod-80912e24-cd31-449b-bdfc-3c82de232256": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012098036s May 13 22:01:51.518: INFO: Pod "pod-80912e24-cd31-449b-bdfc-3c82de232256": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01609208s STEP: Saw pod success May 13 22:01:51.518: INFO: Pod "pod-80912e24-cd31-449b-bdfc-3c82de232256" satisfied condition "Succeeded or Failed" May 13 22:01:51.521: INFO: Trying to get logs from node node2 pod pod-80912e24-cd31-449b-bdfc-3c82de232256 container test-container: STEP: delete the pod May 13 22:01:51.533: INFO: Waiting for pod pod-80912e24-cd31-449b-bdfc-3c82de232256 to disappear May 13 22:01:51.535: INFO: Pod pod-80912e24-cd31-449b-bdfc-3c82de232256 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:51.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2884" for this suite. • [SLOW TEST:6.075 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":269,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:51.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:55.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4055" for this suite. 
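------------------------------
For the kubelet case just finished: the suite schedules a busybox command that always fails and asserts that the container status ends up Terminated with a reason (typically "Error") rather than stuck Waiting. A minimal client-go sketch of reading that state follows; the pod name "bin-false", the namespace "default", and the error handling are illustrative assumptions, not the suite's fixtures.

// terminated_reason.go: print the terminated reason/exit code the kubelet
// recorded for each container of a pod (a sketch, not the e2e framework's code).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the same path this suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "bin-false" and "default" are hypothetical names for illustration.
	pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "bin-false", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil {
			// For a command that always fails, Reason is typically "Error".
			fmt.Printf("container %s terminated: reason=%q exitCode=%d\n", cs.Name, t.Reason, t.ExitCode)
		}
	}
}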
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":282,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:55.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:01:55.675: INFO: Waiting up to 5m0s for pod "downwardapi-volume-522dfeaf-68d2-43c5-be47-8f2ca02c6eb5" in namespace "projected-8039" to be "Succeeded or Failed" May 13 22:01:55.677: INFO: Pod "downwardapi-volume-522dfeaf-68d2-43c5-be47-8f2ca02c6eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201068ms May 13 22:01:57.681: INFO: Pod "downwardapi-volume-522dfeaf-68d2-43c5-be47-8f2ca02c6eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006214658s May 13 22:01:59.684: INFO: Pod "downwardapi-volume-522dfeaf-68d2-43c5-be47-8f2ca02c6eb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008778366s STEP: Saw pod success May 13 22:01:59.684: INFO: Pod "downwardapi-volume-522dfeaf-68d2-43c5-be47-8f2ca02c6eb5" satisfied condition "Succeeded or Failed" May 13 22:01:59.686: INFO: Trying to get logs from node node1 pod downwardapi-volume-522dfeaf-68d2-43c5-be47-8f2ca02c6eb5 container client-container: STEP: delete the pod May 13 22:01:59.698: INFO: Waiting for pod downwardapi-volume-522dfeaf-68d2-43c5-be47-8f2ca02c6eb5 to disappear May 13 22:01:59.699: INFO: Pod downwardapi-volume-522dfeaf-68d2-43c5-be47-8f2ca02c6eb5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:01:59.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8039" for this suite. 
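------------------------------
On the Projected downwardAPI case just finished: DefaultMode on a projected volume sets the permission bits of every file the volume renders. A sketch of such a pod spec, assuming illustrative names, image, and a 0400 mode; the conformance test uses its own fixture values.

// downward_defaultmode.go: a pod whose projected downward-API volume applies
// DefaultMode (0400) to the rendered files (sketch under assumed names).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // owner read-only, for illustration
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-defaultmode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode, // applies to every projected file
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}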
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:59.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs May 13 22:01:59.848: INFO: Waiting up to 5m0s for pod "pod-3f2b11cb-4e8d-498a-94eb-5fddbbbc821b" in namespace "emptydir-2585" to be "Succeeded or Failed" May 13 22:01:59.850: INFO: Pod "pod-3f2b11cb-4e8d-498a-94eb-5fddbbbc821b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20715ms May 13 22:02:01.853: INFO: Pod "pod-3f2b11cb-4e8d-498a-94eb-5fddbbbc821b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005122503s May 13 22:02:03.857: INFO: Pod "pod-3f2b11cb-4e8d-498a-94eb-5fddbbbc821b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008854157s STEP: Saw pod success May 13 22:02:03.857: INFO: Pod "pod-3f2b11cb-4e8d-498a-94eb-5fddbbbc821b" satisfied condition "Succeeded or Failed" May 13 22:02:03.860: INFO: Trying to get logs from node node2 pod pod-3f2b11cb-4e8d-498a-94eb-5fddbbbc821b container test-container: STEP: delete the pod May 13 22:02:03.876: INFO: Waiting for pod pod-3f2b11cb-4e8d-498a-94eb-5fddbbbc821b to disappear May 13 22:02:03.879: INFO: Pod pod-3f2b11cb-4e8d-498a-94eb-5fddbbbc821b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:03.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2585" for this suite. 
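------------------------------
On the emptyDir case just finished: medium "Memory" backs the volume with tmpfs instead of node disk, and the (non-root,0666,tmpfs) variant verifies file modes while running as a non-root user. A sketch under assumed names, image, and an arbitrary non-root UID.

// emptydir_tmpfs.go: the tmpfs-backed emptyDir plus non-root security context
// this case exercises (illustrative sketch, not the suite's fixture).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	nonRoot := int64(1001) // arbitrary non-root UID for illustration
	spec := corev1.PodSpec{
		RestartPolicy:   corev1.RestartPolicyNever,
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox",
			// Write a file and report its mode; the e2e asserts 0666 (-rw-rw-rw-).
			Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" requests tmpfs backing for the volume.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
	}
	b, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(b))
}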
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:33.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:01:33.317: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:35.320: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:37.321: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:39.321: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:41.322: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:43.322: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = false) May 13 22:01:45.324: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = false) May 13 22:01:47.326: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = false) May 13 22:01:49.319: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = false) May 13 22:01:51.320: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = false) May 13 22:01:53.321: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = false) May 13 22:01:55.321: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = false) May 13 22:01:57.321: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = false) May 13 22:01:59.320: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = false) May 13 22:02:01.320: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = false) May 13 22:02:03.319: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = false) May 13 22:02:05.321: INFO: The status of Pod test-webserver-91642f17-ca2f-4c7a-bfa3-36b146c699c1 is Running (Ready = true) May 13 22:02:05.324: INFO: Container started at 2022-05-13 22:01:40 +0000 UTC, pod became ready at 2022-05-13 22:02:03 +0000 UTC [AfterEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:05.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4853" for this suite. • [SLOW TEST:32.060 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:49.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:01:49.364: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 13 22:01:58.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 --namespace=crd-publish-openapi-9551 create -f -' May 13 22:01:58.508: INFO: stderr: "" May 13 22:01:58.508: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 13 22:01:58.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 --namespace=crd-publish-openapi-9551 delete e2e-test-crd-publish-openapi-1848-crds test-foo' May 13 22:01:58.663: INFO: stderr: "" May 13 22:01:58.663: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 13 22:01:58.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 --namespace=crd-publish-openapi-9551 apply -f -' May 13 22:01:59.034: INFO: stderr: "" May 13 22:01:59.035: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 13 22:01:59.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 --namespace=crd-publish-openapi-9551 delete e2e-test-crd-publish-openapi-1848-crds test-foo' May 13 22:01:59.204: INFO: stderr: "" May 13 22:01:59.204: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 13 22:01:59.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 --namespace=crd-publish-openapi-9551 create -f -' May 13 22:01:59.533: 
INFO: rc: 1 May 13 22:01:59.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 --namespace=crd-publish-openapi-9551 apply -f -' May 13 22:01:59.866: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 13 22:01:59.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 --namespace=crd-publish-openapi-9551 create -f -' May 13 22:02:00.219: INFO: rc: 1 May 13 22:02:00.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 --namespace=crd-publish-openapi-9551 apply -f -' May 13 22:02:00.554: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 13 22:02:00.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 explain e2e-test-crd-publish-openapi-1848-crds' May 13 22:02:00.886: INFO: stderr: "" May 13 22:02:00.886: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1848-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 13 22:02:00.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 explain e2e-test-crd-publish-openapi-1848-crds.metadata' May 13 22:02:01.247: INFO: stderr: "" May 13 22:02:01.247: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1848-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations.
Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix.
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object.
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 13 22:02:01.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 explain e2e-test-crd-publish-openapi-1848-crds.spec' May 13 22:02:01.602: INFO: stderr: "" May 13 22:02:01.602: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1848-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 13 22:02:01.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 explain e2e-test-crd-publish-openapi-1848-crds.spec.bars' May 13 22:02:01.986: INFO: stderr: "" May 13 22:02:01.986: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1848-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 13 22:02:01.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9551 explain e2e-test-crd-publish-openapi-1848-crds.spec.bars2' May 13 22:02:02.353: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:06.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9551" for this suite.
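------------------------------
On the CRD case just finished: the kubectl explain output above reflects the OpenAPI v3 validation schema the CRD publishes. Below is an approximate reconstruction of that schema in Go, inferred only from the explain output (types and descriptions as shown; field types for "age" etc. are assumptions, and this is not the suite's exact test data).

// crd_schema.go: sketch of the structural schema behind the "works for CRD
// with validation schema" case, reconstructed from the explain output above.
package main

import (
	"encoding/json"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	str := apiextv1.JSONSchemaProps{Type: "string"}
	schema := apiextv1.JSONSchemaProps{
		Type:        "object",
		Description: "Foo CRD for Testing",
		Properties: map[string]apiextv1.JSONSchemaProps{
			"spec": {
				Type:        "object",
				Description: "Specification of Foo",
				Properties: map[string]apiextv1.JSONSchemaProps{
					"bars": {
						Type:        "array",
						Description: "List of Bars and their specs.",
						Items: &apiextv1.JSONSchemaPropsOrArray{
							Schema: &apiextv1.JSONSchemaProps{
								Type:     "object",
								Required: []string{"name"}, // explain shows name as -required-
								Properties: map[string]apiextv1.JSONSchemaProps{
									"name": str,
									"age":  str, // "Age of Bar." (type assumed)
									"bazs": {Type: "array", Items: &apiextv1.JSONSchemaPropsOrArray{Schema: &str}},
								},
							},
						},
					},
				},
			},
			"status": {Type: "object", Description: "Status of Foo"},
		},
	}
	b, _ := json.MarshalIndent(schema, "", "  ")
	fmt.Println(string(b))
}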
• [SLOW TEST:16.682 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":24,"skipped":394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:05.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-5564 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5564 to expose endpoints map[] May 13 22:02:05.410: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found May 13 22:02:06.419: INFO: successfully validated that service endpoint-test2 in namespace services-5564 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-5564 May 13 22:02:06.433: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:08.436: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:10.437: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5564 to expose endpoints map[pod1:[80]] May 13 22:02:10.447: INFO: successfully validated that service endpoint-test2 in namespace services-5564 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-5564 May 13 22:02:10.462: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:12.465: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:14.467: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5564 to expose endpoints map[pod1:[80] pod2:[80]] May 13 22:02:14.480: INFO: successfully validated that service endpoint-test2 in namespace services-5564 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-5564 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5564 to expose endpoints map[pod2:[80]] May 13 22:02:14.495: INFO: successfully validated that service endpoint-test2 in namespace services-5564 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-5564 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5564 to expose endpoints map[] May 13 22:02:14.505: INFO: successfully validated that service endpoint-test2 in namespace services-5564 exposes 
endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:14.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5564" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:9.145 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":10,"skipped":196,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:14.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods May 13 22:02:14.655: INFO: created test-pod-1 May 13 22:02:14.663: INFO: created test-pod-2 May 13 22:02:14.672: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:14.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3095" for this suite. 
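------------------------------
On the pod-collection case just finished: a single DeleteCollection call removes every pod matching a selector, which is how the three test-pod-N pods go away at once. A minimal client-go sketch; the namespace and the "type=Testing" label selector are assumptions, not necessarily the test's exact values.

// delete_collection.go: delete all pods matching a label selector in one call.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One API call deletes the whole labeled set; callers then typically watch
	// or poll the namespace until the pods disappear, as the test does.
	err = client.CoreV1().Pods("default").DeleteCollection(
		context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "type=Testing"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("delete-collection request accepted")
}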
• ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":11,"skipped":246,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 21:58:09.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0513 21:58:09.339882 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 21:58:09.340: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 21:58:09.341: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-e33fcb59-7a0e-46aa-8f4a-abcb91fcfab7 in namespace container-probe-1338 May 13 21:58:19.363: INFO: Started pod busybox-e33fcb59-7a0e-46aa-8f4a-abcb91fcfab7 in namespace container-probe-1338 STEP: checking the pod's current state and verifying that restartCount is present May 13 21:58:19.365: INFO: Initial restart count of pod busybox-e33fcb59-7a0e-46aa-8f4a-abcb91fcfab7 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:20.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1338" for this suite. 
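------------------------------
On the liveness case just finished: the probe execs `cat /tmp/health`, and because the pod keeps the file present, the probe keeps succeeding and the restart count stays at 0 for the whole ~4-minute observation window. A sketch of that probe shape; the container command and probe timings are illustrative.

// liveness_exec.go: an exec liveness probe that should never fail, so the
// container is never restarted (sketch; timings are illustrative).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "busybox",
		Image: "busybox",
		// Create the health file up front, then keep the container alive.
		Command: []string{"sh", "-c", "touch /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			// Note: k8s.io/api releases for v1.21 embed this as Handler;
			// releases after v1.22 rename the embedded field to ProbeHandler.
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1,
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}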
• [SLOW TEST:251.089 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:20.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:20.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6568" for this suite. 
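------------------------------
On the sysctl case just finished: pod-level sysctls are declared in the pod's security context, and validation rejects malformed names before the pod is ever scheduled, which is why the test finishes without waiting on a running pod. A sketch mixing one plausibly valid namespaced sysctl with two malformed names (the values are illustrative, not the test's exact data).

// sysctl_validation.go: one valid and two invalid sysctl names; creating a pod
// with this security context is rejected by API/kubelet validation.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	sc := corev1.PodSecurityContext{
		Sysctls: []corev1.Sysctl{
			{Name: "kernel.shm_rmid_forced", Value: "0"}, // valid, namespaced sysctl
			{Name: "foo-", Value: "bar"},                 // invalid: malformed name
			{Name: "bar..", Value: "42"},                 // invalid: malformed name
		},
	}
	b, _ := json.MarshalIndent(sc, "", "  ")
	fmt.Println(string(b))
}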
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:29.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-81200924-abc3-4fbc-b5b3-2ea858a8c28d in namespace container-probe-1097 May 13 22:01:33.099: INFO: Started pod busybox-81200924-abc3-4fbc-b5b3-2ea858a8c28d in namespace container-probe-1097 STEP: checking the pod's current state and verifying that restartCount is present May 13 22:01:33.101: INFO: Initial restart count of pod busybox-81200924-abc3-4fbc-b5b3-2ea858a8c28d is 0 May 13 22:02:21.211: INFO: Restart count of pod container-probe-1097/busybox-81200924-abc3-4fbc-b5b3-2ea858a8c28d is now 1 (48.110000007s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:21.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1097" for this suite. 
• [SLOW TEST:52.166 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":131,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:21.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added May 13 22:02:21.275: INFO: Found Service test-service-ck7b6 in namespace services-3122 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] May 13 22:02:21.275: INFO: Service test-service-ck7b6 created STEP: Getting /status May 13 22:02:21.278: INFO: Service test-service-ck7b6 has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched May 13 22:02:21.284: INFO: observed Service test-service-ck7b6 in namespace services-3122 with annotations: map[] & LoadBalancer: {[]} May 13 22:02:21.284: INFO: Found Service test-service-ck7b6 in namespace services-3122 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} May 13 22:02:21.284: INFO: Service test-service-ck7b6 has service status patched STEP: updating the ServiceStatus May 13 22:02:21.289: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated May 13 22:02:21.290: INFO: Observed Service test-service-ck7b6 in namespace services-3122 with annotations: map[] & Conditions: {[]} May 13 22:02:21.290: INFO: Observed event: &Service{ObjectMeta:{test-service-ck7b6 services-3122 3977cc67-557e-4bb2-8952-cbbeeaf4e23c 39441 0 2022-05-13 22:02:21 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-05-13 22:02:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.43.133,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.43.133],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} May 13 22:02:21.290: INFO: Found Service test-service-ck7b6 in namespace services-3122 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] May 13 22:02:21.290: INFO: Service test-service-ck7b6 has service status updated STEP: patching the service STEP: watching for the Service to be patched May 13 22:02:21.304: INFO: observed Service test-service-ck7b6 in namespace services-3122 with labels: map[test-service-static:true] May 13 22:02:21.304: INFO: observed Service test-service-ck7b6 in namespace services-3122 with labels: map[test-service-static:true] May 13 22:02:21.304: INFO: observed Service test-service-ck7b6 in namespace services-3122 with labels: map[test-service-static:true] May 13 22:02:21.304: INFO: Found Service test-service-ck7b6 in namespace services-3122 with labels: map[test-service:patched test-service-static:true] May 13 22:02:21.304: INFO: Service test-service-ck7b6 patched STEP: deleting the service STEP: watching for the Service to be deleted May 13 22:02:21.312: INFO: Observed event: ADDED May 13 22:02:21.312: INFO: Observed event: MODIFIED May 13 22:02:21.312: INFO: Observed event: MODIFIED May 13 22:02:21.313: INFO: Observed event: MODIFIED May 13 22:02:21.313: INFO: Found Service test-service-ck7b6 in namespace services-3122 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] May 13 22:02:21.313: INFO: Service test-service-ck7b6 deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:21.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3122" for this suite. 
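------------------------------
On the service-status lifecycle just finished: status is a subresource, so the test patches it separately from the spec. A minimal client-go sketch of that step; the 203.0.113.1 ingress IP and patchedstatus annotation match the log above, while the service name, namespace, and kubeconfig path are illustrative.

// service_status_patch.go: merge-patch a Service's status subresource to set a
// fake LoadBalancer ingress, as in the "patching the ServiceStatus" step.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	patch := []byte(`{"metadata":{"annotations":{"patchedstatus":"true"}},` +
		`"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}`)
	svc, err := client.CoreV1().Services("default").Patch(
		context.TODO(), "test-service", types.MergePatchType, patch,
		metav1.PatchOptions{}, "status") // trailing "status" targets the subresource
	if err != nil {
		panic(err)
	}
	fmt.Printf("ingress now: %+v\n", svc.Status.LoadBalancer.Ingress)
}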
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":8,"skipped":140,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:14.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-6188 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6188 to expose endpoints map[] May 13 22:02:14.762: INFO: successfully validated that service multi-endpoint-test in namespace services-6188 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6188 May 13 22:02:14.774: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:16.777: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:18.777: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:20.777: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6188 to expose endpoints map[pod1:[100]] May 13 22:02:20.787: INFO: successfully validated that service multi-endpoint-test in namespace services-6188 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-6188 May 13 22:02:20.799: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:22.803: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:24.803: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6188 to expose endpoints map[pod1:[100] pod2:[101]] May 13 22:02:24.819: INFO: successfully validated that service multi-endpoint-test in namespace services-6188 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-6188 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6188 to expose endpoints map[pod2:[101]] May 13 22:02:24.834: INFO: successfully validated that service multi-endpoint-test in namespace services-6188 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-6188 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6188 to expose endpoints map[] May 13 22:02:24.845: INFO: successfully validated that service multi-endpoint-test in namespace services-6188 exposes endpoints map[] [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:24.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6188" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:10.133 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":12,"skipped":247,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:21.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-1e14671a-ff88-473a-91d3-01545329017e STEP: Creating a pod to test consume configMaps May 13 22:02:21.373: INFO: Waiting up to 5m0s for pod "pod-configmaps-38ba48d6-d583-434b-a404-b5428d5fb61a" in namespace "configmap-3167" to be "Succeeded or Failed" May 13 22:02:21.375: INFO: Pod "pod-configmaps-38ba48d6-d583-434b-a404-b5428d5fb61a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.88993ms May 13 22:02:23.378: INFO: Pod "pod-configmaps-38ba48d6-d583-434b-a404-b5428d5fb61a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005016454s May 13 22:02:25.382: INFO: Pod "pod-configmaps-38ba48d6-d583-434b-a404-b5428d5fb61a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009128913s STEP: Saw pod success May 13 22:02:25.382: INFO: Pod "pod-configmaps-38ba48d6-d583-434b-a404-b5428d5fb61a" satisfied condition "Succeeded or Failed" May 13 22:02:25.384: INFO: Trying to get logs from node node2 pod pod-configmaps-38ba48d6-d583-434b-a404-b5428d5fb61a container agnhost-container: STEP: delete the pod May 13 22:02:25.396: INFO: Waiting for pod pod-configmaps-38ba48d6-d583-434b-a404-b5428d5fb61a to disappear May 13 22:02:25.397: INFO: Pod pod-configmaps-38ba48d6-d583-434b-a404-b5428d5fb61a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:25.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3167" for this suite. 
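------------------------------
On the ConfigMap case just finished: items remap selected ConfigMap keys to new paths inside the mount, and a per-item mode overrides the volume's defaultMode, which is the "mappings and Item mode set" combination. A sketch with illustrative key, path, and mode values.

// configmap_items.go: project only selected ConfigMap keys, each under a
// remapped path with its own file mode (sketch under assumed names).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400)
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",         // key inside the ConfigMap (illustrative)
					Path: "path/to/data-2", // remapped filename inside the mount
					Mode: &itemMode,        // per-item mode overrides defaultMode
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}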
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":151,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:24.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all May 13 22:02:24.907: INFO: Waiting up to 5m0s for pod "client-containers-3402d273-b6c6-4425-96f0-b8f25cefebd7" in namespace "containers-5553" to be "Succeeded or Failed" May 13 22:02:24.909: INFO: Pod "client-containers-3402d273-b6c6-4425-96f0-b8f25cefebd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267262ms May 13 22:02:26.914: INFO: Pod "client-containers-3402d273-b6c6-4425-96f0-b8f25cefebd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006968786s May 13 22:02:28.918: INFO: Pod "client-containers-3402d273-b6c6-4425-96f0-b8f25cefebd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010598111s May 13 22:02:30.922: INFO: Pod "client-containers-3402d273-b6c6-4425-96f0-b8f25cefebd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01498857s STEP: Saw pod success May 13 22:02:30.922: INFO: Pod "client-containers-3402d273-b6c6-4425-96f0-b8f25cefebd7" satisfied condition "Succeeded or Failed" May 13 22:02:30.924: INFO: Trying to get logs from node node2 pod client-containers-3402d273-b6c6-4425-96f0-b8f25cefebd7 container agnhost-container: STEP: delete the pod May 13 22:02:30.936: INFO: Waiting for pod client-containers-3402d273-b6c6-4425-96f0-b8f25cefebd7 to disappear May 13 22:02:30.938: INFO: Pod client-containers-3402d273-b6c6-4425-96f0-b8f25cefebd7 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:30.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5553" for this suite. 
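------------------------------
On the containers case just finished: in a pod spec, Command replaces the image's ENTRYPOINT and Args replaces its CMD; "override all" sets both. The log shows an agnhost-container, so the sketch below uses agnhost, but the image tag, command, and argument values here are assumptions.

// override_entrypoint.go: override both the image entrypoint and its default
// arguments (sketch; values are illustrative).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "agnhost-container",
		Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.32", // assumed tag
		Command: []string{"/agnhost", "entrypoint-tester"},  // replaces ENTRYPOINT
		Args:    []string{"override", "arguments"},          // replaces CMD
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}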
• [SLOW TEST:6.075 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":253,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:25.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:02:25.480: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5663b2fc-d30a-4a2c-b694-1f4eb11b636f" in namespace "downward-api-6593" to be "Succeeded or Failed" May 13 22:02:25.482: INFO: Pod "downwardapi-volume-5663b2fc-d30a-4a2c-b694-1f4eb11b636f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.791936ms May 13 22:02:27.486: INFO: Pod "downwardapi-volume-5663b2fc-d30a-4a2c-b694-1f4eb11b636f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005287417s May 13 22:02:29.489: INFO: Pod "downwardapi-volume-5663b2fc-d30a-4a2c-b694-1f4eb11b636f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009227653s May 13 22:02:31.494: INFO: Pod "downwardapi-volume-5663b2fc-d30a-4a2c-b694-1f4eb11b636f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013334531s STEP: Saw pod success May 13 22:02:31.494: INFO: Pod "downwardapi-volume-5663b2fc-d30a-4a2c-b694-1f4eb11b636f" satisfied condition "Succeeded or Failed" May 13 22:02:31.496: INFO: Trying to get logs from node node2 pod downwardapi-volume-5663b2fc-d30a-4a2c-b694-1f4eb11b636f container client-container: STEP: delete the pod May 13 22:02:31.508: INFO: Waiting for pod downwardapi-volume-5663b2fc-d30a-4a2c-b694-1f4eb11b636f to disappear May 13 22:02:31.510: INFO: Pod downwardapi-volume-5663b2fc-d30a-4a2c-b694-1f4eb11b636f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:31.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6593" for this suite. 
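------------------------------
On the downward API case just finished: a resourceFieldRef volume file exposes the container's cpu limit as file content, scaled by the divisor (e.g. "1m" reports millicores). A sketch with illustrative names and divisor.

// downward_cpu_limit.go: expose a container's cpu limit through a downward-API
// volume file (sketch under assumed names).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container", // container whose limit is exposed
						Resource:      "limits.cpu",
						Divisor:       resource.MustParse("1m"), // report in millicores
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}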
• [SLOW TEST:6.070 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":170,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:31.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-cd5e633e-6101-4c35-9ed9-38873c5f59c4 STEP: Creating a pod to test consume secrets May 13 22:02:31.572: INFO: Waiting up to 5m0s for pod "pod-secrets-f05603ac-9081-4728-946f-ff159b996888" in namespace "secrets-3327" to be "Succeeded or Failed" May 13 22:02:31.574: INFO: Pod "pod-secrets-f05603ac-9081-4728-946f-ff159b996888": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188012ms May 13 22:02:33.578: INFO: Pod "pod-secrets-f05603ac-9081-4728-946f-ff159b996888": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005725548s May 13 22:02:35.582: INFO: Pod "pod-secrets-f05603ac-9081-4728-946f-ff159b996888": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010148766s STEP: Saw pod success May 13 22:02:35.582: INFO: Pod "pod-secrets-f05603ac-9081-4728-946f-ff159b996888" satisfied condition "Succeeded or Failed" May 13 22:02:35.585: INFO: Trying to get logs from node node2 pod pod-secrets-f05603ac-9081-4728-946f-ff159b996888 container secret-volume-test: STEP: delete the pod May 13 22:02:35.599: INFO: Waiting for pod pod-secrets-f05603ac-9081-4728-946f-ff159b996888 to disappear May 13 22:02:35.600: INFO: Pod pod-secrets-f05603ac-9081-4728-946f-ff159b996888 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:35.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3327" for this suite. 
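------------------------------
The Secrets spec above is the "mappings and Item Mode" variant: instead of mounting every key of the secret at its own name, it maps a single key to a custom path and gives that file an explicit per-item mode. A minimal sketch of the volume source (secret name, key, path, and mode are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map",
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "new-path-data-1",
					Mode: int32Ptr(0400), // per-item mode; overrides the volume's defaultMode for this file
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}

When Items is set, only the listed keys are projected; keys not mentioned do not appear in the volume at all, which is exactly what the mapping assertion checks.
------------------------------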
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":173,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:20.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled May 13 22:02:20.535: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:22.539: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:24.539: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled May 13 22:02:24.552: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:26.556: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:28.556: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides May 13 22:02:28.569: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:30.573: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:32.571: INFO: The status of Pod pod3 is Running (Ready = true) May 13 22:02:32.583: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:34.587: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 May 13 22:02:34.589: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.207 http://127.0.0.1:54323/hostname] Namespace:hostport-5370 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:02:34.589: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 May 13 22:02:34.678: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.207:54323/hostname] Namespace:hostport-5370 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:02:34.678: INFO: >>> kubeConfig: /root/.kube/config STEP: checking 
connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 UDP May 13 22:02:34.798: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.207 54323] Namespace:hostport-5370 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:02:34.798: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:39.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-5370" for this suite. • [SLOW TEST:19.393 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:20.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-9c1d01c3-ac84-471e-9a29-0de455cf323f STEP: Creating secret with name s-test-opt-upd-dacd9a1c-3328-4289-8f7d-5a278bbc9582 STEP: Creating the pod May 13 22:01:20.209: INFO: The status of Pod pod-secrets-ef839275-b8dd-4763-81f1-923e7ac65d82 is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:22.212: INFO: The status of Pod pod-secrets-ef839275-b8dd-4763-81f1-923e7ac65d82 is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:24.213: INFO: The status of Pod pod-secrets-ef839275-b8dd-4763-81f1-923e7ac65d82 is Pending, waiting for it to be Running (with Ready = true) May 13 22:01:26.214: INFO: The status of Pod pod-secrets-ef839275-b8dd-4763-81f1-923e7ac65d82 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-9c1d01c3-ac84-471e-9a29-0de455cf323f STEP: Updating secret s-test-opt-upd-dacd9a1c-3328-4289-8f7d-5a278bbc9582 STEP: Creating secret with name s-test-opt-create-1a2e6b0c-9733-49de-8905-84b9d2acac4d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:41.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4931" for this suite. 
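------------------------------
The "optional updates" Secrets spec above mounts several secret volumes marked optional, then deletes one secret, updates another, and creates a third that did not exist when the pod started; the long "waiting to observe update in volume" phase (and the 81-second runtime) comes from waiting for the kubelet's periodic sync to refresh the mounted files. A minimal sketch of volumes in the spirit of that fixture (names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// One secret that will be deleted, one that will be updated, and one
	// created only after the pod is running. Optional lets the pod start
	// even while a referenced secret is absent.
	vols := []corev1.Volume{
		{Name: "del", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
			SecretName: "s-test-opt-del", Optional: boolPtr(true)}}},
		{Name: "upd", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
			SecretName: "s-test-opt-upd", Optional: boolPtr(true)}}},
		{Name: "create", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
			SecretName: "s-test-opt-create", Optional: boolPtr(true)}}},
	}
	out, _ := json.MarshalIndent(vols, "", "  ")
	fmt.Println(string(out))
}

Note that secrets consumed as environment variables are not refreshed this way; only volume-mounted secrets are updated in place by the kubelet.
------------------------------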
• [SLOW TEST:81.236 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":169,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:41.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:41.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8728" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":16,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:04.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics May 13 22:02:44.085: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 13 22:02:44.257: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For 
namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: May 13 22:02:44.257: INFO: Deleting pod "simpletest.rc-66fr9" in namespace "gc-5861" May 13 22:02:44.266: INFO: Deleting pod "simpletest.rc-862ld" in namespace "gc-5861" May 13 22:02:44.272: INFO: Deleting pod "simpletest.rc-dj4s4" in namespace "gc-5861" May 13 22:02:44.277: INFO: Deleting pod "simpletest.rc-fdstp" in namespace "gc-5861" May 13 22:02:44.282: INFO: Deleting pod "simpletest.rc-hglxb" in namespace "gc-5861" May 13 22:02:44.288: INFO: Deleting pod "simpletest.rc-qc4cj" in namespace "gc-5861" May 13 22:02:44.294: INFO: Deleting pod "simpletest.rc-qndh8" in namespace "gc-5861" May 13 22:02:44.299: INFO: Deleting pod "simpletest.rc-qzxjb" in namespace "gc-5861" May 13 22:02:44.304: INFO: Deleting pod "simpletest.rc-x4hkc" in namespace "gc-5861" May 13 22:02:44.311: INFO: Deleting pod "simpletest.rc-z8fg6" in namespace "gc-5861" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:44.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5861" for this suite. • [SLOW TEST:40.323 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":21,"skipped":398,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:44.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created May 13 22:02:44.473: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:46.477: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:48.477: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:50.476: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 13 22:02:51.491: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:52.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1861" for this suite. • [SLOW TEST:8.112 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":22,"skipped":450,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:35.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:52.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5271" for this suite. • [SLOW TEST:17.062 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":-1,"completed":12,"skipped":175,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:06.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 13 22:02:06.099: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 13 22:02:25.029: INFO: >>> kubeConfig: /root/.kube/config May 13 22:02:33.637: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:52.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2791" for this suite. • [SLOW TEST:46.781 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":25,"skipped":418,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:52.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 13 22:02:55.607: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:55.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-runtime-5077" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":451,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:52.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 13 22:02:52.739: INFO: Waiting up to 5m0s for pod "downward-api-612cb585-3ddd-4e16-99ee-cbee3df27877" in namespace "downward-api-5990" to be "Succeeded or Failed" May 13 22:02:52.742: INFO: Pod "downward-api-612cb585-3ddd-4e16-99ee-cbee3df27877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.491527ms May 13 22:02:54.745: INFO: Pod "downward-api-612cb585-3ddd-4e16-99ee-cbee3df27877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005174444s May 13 22:02:56.748: INFO: Pod "downward-api-612cb585-3ddd-4e16-99ee-cbee3df27877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008947561s STEP: Saw pod success May 13 22:02:56.749: INFO: Pod "downward-api-612cb585-3ddd-4e16-99ee-cbee3df27877" satisfied condition "Succeeded or Failed" May 13 22:02:56.751: INFO: Trying to get logs from node node2 pod downward-api-612cb585-3ddd-4e16-99ee-cbee3df27877 container dapi-container: STEP: delete the pod May 13 22:02:56.764: INFO: Waiting for pod downward-api-612cb585-3ddd-4e16-99ee-cbee3df27877 to disappear May 13 22:02:56.766: INFO: Pod downward-api-612cb585-3ddd-4e16-99ee-cbee3df27877 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:02:56.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5990" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":189,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:55.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 13 22:02:55.667: INFO: Waiting up to 5m0s for pod "downward-api-fdcb2de4-701b-4988-a32a-96ca52ca7794" in namespace "downward-api-3935" to be "Succeeded or Failed" May 13 22:02:55.669: INFO: Pod "downward-api-fdcb2de4-701b-4988-a32a-96ca52ca7794": Phase="Pending", Reason="", readiness=false. Elapsed: 1.767878ms May 13 22:02:57.673: INFO: Pod "downward-api-fdcb2de4-701b-4988-a32a-96ca52ca7794": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006287115s May 13 22:02:59.676: INFO: Pod "downward-api-fdcb2de4-701b-4988-a32a-96ca52ca7794": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00939009s May 13 22:03:01.681: INFO: Pod "downward-api-fdcb2de4-701b-4988-a32a-96ca52ca7794": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013957262s STEP: Saw pod success May 13 22:03:01.681: INFO: Pod "downward-api-fdcb2de4-701b-4988-a32a-96ca52ca7794" satisfied condition "Succeeded or Failed" May 13 22:03:01.684: INFO: Trying to get logs from node node1 pod downward-api-fdcb2de4-701b-4988-a32a-96ca52ca7794 container dapi-container: STEP: delete the pod May 13 22:03:01.696: INFO: Waiting for pod downward-api-fdcb2de4-701b-4988-a32a-96ca52ca7794 to disappear May 13 22:03:01.699: INFO: Pod downward-api-fdcb2de4-701b-4988-a32a-96ca52ca7794 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:01.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3935" for this suite. 
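------------------------------
The "default limits.cpu/memory from node allocatable" spec above deliberately sets no resource limits on its container: when a resourceFieldRef asks for limits.cpu or limits.memory and the container declares none, the downward API falls back to the node's allocatable values. A minimal sketch of the env vars involved (variable names and divisors are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// With no limits on the container, these resolve to node allocatable.
	env := []corev1.EnvVar{
		{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{
				Resource: "limits.cpu",
				Divisor:  resource.MustParse("1"), // report in whole cores
			}}},
		{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{
				Resource: "limits.memory",
				Divisor:  resource.MustParse("1Mi"), // report in mebibytes
			}}},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}
------------------------------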
• [SLOW TEST:6.077 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":453,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:39.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-mdm56 in namespace proxy-6985 I0513 22:02:39.974385 28 runners.go:190] Created replication controller with name: proxy-service-mdm56, namespace: proxy-6985, replica count: 1 I0513 22:02:41.024745 28 runners.go:190] proxy-service-mdm56 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:02:42.025221 28 runners.go:190] proxy-service-mdm56 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:02:43.026448 28 runners.go:190] proxy-service-mdm56 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:02:44.027603 28 runners.go:190] proxy-service-mdm56 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 22:02:44.030: INFO: setup took 4.06555011s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 13 22:02:44.034: INFO: (0) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 3.611549ms) May 13 22:02:44.034: INFO: (0) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 3.695507ms) May 13 22:02:44.034: INFO: (0) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... (200; 3.841482ms) May 13 22:02:44.034: INFO: (0) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 3.710367ms) May 13 22:02:44.034: INFO: (0) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 3.911614ms) May 13 22:02:44.034: INFO: (0) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... 
(200; 3.902753ms) May 13 22:02:44.034: INFO: (0) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 3.793826ms) May 13 22:02:44.034: INFO: (0) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 3.826782ms) May 13 22:02:44.035: INFO: (0) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 4.179292ms) May 13 22:02:44.035: INFO: (0) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 4.200421ms) May 13 22:02:44.037: INFO: (0) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 6.949577ms) May 13 22:02:44.039: INFO: (0) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 8.88335ms) May 13 22:02:44.039: INFO: (0) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 8.791835ms) May 13 22:02:44.039: INFO: (0) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 8.807247ms) May 13 22:02:44.039: INFO: (0) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test<... (200; 2.498043ms) May 13 22:02:44.042: INFO: (1) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 2.427569ms) May 13 22:02:44.042: INFO: (1) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:460/proxy/: tls baz (200; 2.625295ms) May 13 22:02:44.042: INFO: (1) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.463442ms) May 13 22:02:44.042: INFO: (1) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... (200; 2.542148ms) May 13 22:02:44.042: INFO: (1) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 2.82047ms) May 13 22:02:44.043: INFO: (1) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: ... (200; 1.955016ms) May 13 22:02:44.046: INFO: (2) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.204396ms) May 13 22:02:44.047: INFO: (2) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 2.556206ms) May 13 22:02:44.047: INFO: (2) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:460/proxy/: tls baz (200; 2.391417ms) May 13 22:02:44.047: INFO: (2) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... 
(200; 2.598005ms) May 13 22:02:44.047: INFO: (2) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.666856ms) May 13 22:02:44.047: INFO: (2) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 2.788049ms) May 13 22:02:44.047: INFO: (2) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.805358ms) May 13 22:02:44.047: INFO: (2) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 3.102173ms) May 13 22:02:44.047: INFO: (2) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test (200; 3.198494ms) May 13 22:02:44.048: INFO: (2) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 3.871777ms) May 13 22:02:44.048: INFO: (2) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 3.773102ms) May 13 22:02:44.048: INFO: (2) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 3.874596ms) May 13 22:02:44.050: INFO: (3) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test (200; 2.472581ms) May 13 22:02:44.051: INFO: (3) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 2.434361ms) May 13 22:02:44.051: INFO: (3) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 2.719635ms) May 13 22:02:44.051: INFO: (3) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.626173ms) May 13 22:02:44.051: INFO: (3) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... (200; 2.683929ms) May 13 22:02:44.051: INFO: (3) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 2.727007ms) May 13 22:02:44.052: INFO: (3) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 3.223464ms) May 13 22:02:44.052: INFO: (3) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... (200; 3.432168ms) May 13 22:02:44.052: INFO: (3) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 3.188484ms) May 13 22:02:44.052: INFO: (3) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 3.248737ms) May 13 22:02:44.052: INFO: (3) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 3.641731ms) May 13 22:02:44.052: INFO: (3) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 3.803554ms) May 13 22:02:44.052: INFO: (3) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 4.206369ms) May 13 22:02:44.053: INFO: (3) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 4.514306ms) May 13 22:02:44.055: INFO: (4) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.294309ms) May 13 22:02:44.055: INFO: (4) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 2.206543ms) May 13 22:02:44.055: INFO: (4) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test<... 
(200; 2.895745ms) May 13 22:02:44.056: INFO: (4) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 2.808873ms) May 13 22:02:44.056: INFO: (4) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.996241ms) May 13 22:02:44.056: INFO: (4) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 3.161982ms) May 13 22:02:44.057: INFO: (4) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... (200; 3.461491ms) May 13 22:02:44.057: INFO: (4) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 3.619783ms) May 13 22:02:44.057: INFO: (4) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 3.725697ms) May 13 22:02:44.057: INFO: (4) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 3.912943ms) May 13 22:02:44.057: INFO: (4) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 3.958538ms) May 13 22:02:44.057: INFO: (4) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 4.140098ms) May 13 22:02:44.059: INFO: (4) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 6.004525ms) May 13 22:02:44.062: INFO: (5) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.16011ms) May 13 22:02:44.062: INFO: (5) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:460/proxy/: tls baz (200; 2.370635ms) May 13 22:02:44.062: INFO: (5) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: ... (200; 2.86143ms) May 13 22:02:44.062: INFO: (5) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.982793ms) May 13 22:02:44.062: INFO: (5) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 3.070797ms) May 13 22:02:44.062: INFO: (5) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... (200; 2.985528ms) May 13 22:02:44.063: INFO: (5) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 3.122027ms) May 13 22:02:44.063: INFO: (5) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 3.675344ms) May 13 22:02:44.063: INFO: (5) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 3.448119ms) May 13 22:02:44.063: INFO: (5) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 3.853569ms) May 13 22:02:44.063: INFO: (5) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 3.923942ms) May 13 22:02:44.063: INFO: (5) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 3.930143ms) May 13 22:02:44.064: INFO: (5) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 4.429363ms) May 13 22:02:44.066: INFO: (6) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test<... 
(200; 2.38904ms) May 13 22:02:44.067: INFO: (6) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.511336ms) May 13 22:02:44.067: INFO: (6) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 2.976885ms) May 13 22:02:44.067: INFO: (6) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:460/proxy/: tls baz (200; 2.998369ms) May 13 22:02:44.067: INFO: (6) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 3.020263ms) May 13 22:02:44.067: INFO: (6) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... (200; 3.138772ms) May 13 22:02:44.067: INFO: (6) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 3.076268ms) May 13 22:02:44.067: INFO: (6) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 3.266247ms) May 13 22:02:44.068: INFO: (6) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 3.891796ms) May 13 22:02:44.068: INFO: (6) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 3.696857ms) May 13 22:02:44.068: INFO: (6) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 3.688743ms) May 13 22:02:44.068: INFO: (6) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 3.747403ms) May 13 22:02:44.070: INFO: (7) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 1.974415ms) May 13 22:02:44.071: INFO: (7) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... (200; 2.185866ms) May 13 22:02:44.071: INFO: (7) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 2.686054ms) May 13 22:02:44.071: INFO: (7) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:460/proxy/: tls baz (200; 2.928195ms) May 13 22:02:44.071: INFO: (7) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 2.824475ms) May 13 22:02:44.071: INFO: (7) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 3.097221ms) May 13 22:02:44.071: INFO: (7) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test<... (200; 3.624322ms) May 13 22:02:44.072: INFO: (7) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 3.416003ms) May 13 22:02:44.072: INFO: (7) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 3.562805ms) May 13 22:02:44.072: INFO: (7) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 3.504474ms) May 13 22:02:44.072: INFO: (7) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 3.564146ms) May 13 22:02:44.072: INFO: (7) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 3.792326ms) May 13 22:02:44.073: INFO: (7) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 4.1645ms) May 13 22:02:44.075: INFO: (8) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... (200; 2.399064ms) May 13 22:02:44.075: INFO: (8) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... 
(200; 2.281659ms) May 13 22:02:44.075: INFO: (8) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:460/proxy/: tls baz (200; 2.521801ms) May 13 22:02:44.076: INFO: (8) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.745559ms) May 13 22:02:44.076: INFO: (8) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.711509ms) May 13 22:02:44.076: INFO: (8) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.953647ms) May 13 22:02:44.076: INFO: (8) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 2.911442ms) May 13 22:02:44.076: INFO: (8) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 3.072225ms) May 13 22:02:44.076: INFO: (8) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 2.871595ms) May 13 22:02:44.076: INFO: (8) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 3.064889ms) May 13 22:02:44.076: INFO: (8) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 3.243287ms) May 13 22:02:44.076: INFO: (8) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: ... (200; 2.192619ms) May 13 22:02:44.079: INFO: (9) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.482887ms) May 13 22:02:44.080: INFO: (9) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 2.786216ms) May 13 22:02:44.080: INFO: (9) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:460/proxy/: tls baz (200; 2.637ms) May 13 22:02:44.080: INFO: (9) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 2.545569ms) May 13 22:02:44.080: INFO: (9) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... (200; 2.863012ms) May 13 22:02:44.080: INFO: (9) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test<... (200; 2.421468ms) May 13 22:02:44.084: INFO: (10) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... (200; 2.638158ms) May 13 22:02:44.084: INFO: (10) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.649584ms) May 13 22:02:44.084: INFO: (10) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 2.648672ms) May 13 22:02:44.085: INFO: (10) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test (200; 2.352437ms) May 13 22:02:44.089: INFO: (11) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 2.297578ms) May 13 22:02:44.089: INFO: (11) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... (200; 2.481607ms) May 13 22:02:44.089: INFO: (11) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.388793ms) May 13 22:02:44.089: INFO: (11) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... (200; 2.377123ms) May 13 22:02:44.089: INFO: (11) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test<... 
(200; 2.392677ms) May 13 22:02:44.093: INFO: (12) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.56223ms) May 13 22:02:44.093: INFO: (12) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.528594ms) May 13 22:02:44.093: INFO: (12) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 2.744813ms) May 13 22:02:44.094: INFO: (12) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 3.296749ms) May 13 22:02:44.094: INFO: (12) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 3.252317ms) May 13 22:02:44.094: INFO: (12) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... (200; 3.26417ms) May 13 22:02:44.094: INFO: (12) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 3.283364ms) May 13 22:02:44.094: INFO: (12) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 3.529992ms) May 13 22:02:44.094: INFO: (12) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 3.763304ms) May 13 22:02:44.095: INFO: (12) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 3.849886ms) May 13 22:02:44.095: INFO: (12) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 4.123707ms) May 13 22:02:44.095: INFO: (12) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 4.272041ms) May 13 22:02:44.097: INFO: (13) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.005495ms) May 13 22:02:44.098: INFO: (13) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... (200; 2.23599ms) May 13 22:02:44.098: INFO: (13) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.525066ms) May 13 22:02:44.098: INFO: (13) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 2.990715ms) May 13 22:02:44.098: INFO: (13) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 2.847986ms) May 13 22:02:44.098: INFO: (13) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... 
(200; 2.793431ms) May 13 22:02:44.098: INFO: (13) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.939428ms) May 13 22:02:44.098: INFO: (13) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 2.914546ms) May 13 22:02:44.099: INFO: (13) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 3.752221ms) May 13 22:02:44.099: INFO: (13) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 3.660103ms) May 13 22:02:44.099: INFO: (13) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 4.169269ms) May 13 22:02:44.099: INFO: (13) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 4.10871ms) May 13 22:02:44.099: INFO: (13) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:460/proxy/: tls baz (200; 4.052232ms) May 13 22:02:44.099: INFO: (13) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test (200; 3.817692ms) May 13 22:02:44.104: INFO: (14) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 4.020674ms) May 13 22:02:44.104: INFO: (14) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... (200; 4.233269ms) May 13 22:02:44.104: INFO: (14) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 4.394436ms) May 13 22:02:44.104: INFO: (14) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 4.281053ms) May 13 22:02:44.104: INFO: (14) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 4.51664ms) May 13 22:02:44.104: INFO: (14) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 4.497279ms) May 13 22:02:44.104: INFO: (14) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... (200; 4.629617ms) May 13 22:02:44.104: INFO: (14) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: ... (200; 2.275928ms) May 13 22:02:44.107: INFO: (15) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.368524ms) May 13 22:02:44.108: INFO: (15) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... 
(200; 2.635596ms) May 13 22:02:44.108: INFO: (15) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.684158ms) May 13 22:02:44.108: INFO: (15) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:460/proxy/: tls baz (200; 2.639658ms) May 13 22:02:44.108: INFO: (15) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 2.87835ms) May 13 22:02:44.108: INFO: (15) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 2.994042ms) May 13 22:02:44.108: INFO: (15) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 3.143177ms) May 13 22:02:44.108: INFO: (15) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 3.370091ms) May 13 22:02:44.108: INFO: (15) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 3.322162ms) May 13 22:02:44.109: INFO: (15) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 3.79227ms) May 13 22:02:44.109: INFO: (15) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test<... (200; 5.931245ms) May 13 22:02:44.117: INFO: (16) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: test (200; 7.325343ms) May 13 22:02:44.117: INFO: (16) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 7.499845ms) May 13 22:02:44.117: INFO: (16) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 7.390949ms) May 13 22:02:44.117: INFO: (16) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:1080/proxy/: ... (200; 7.32304ms) May 13 22:02:44.124: INFO: (16) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 14.688036ms) May 13 22:02:44.125: INFO: (16) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 15.57011ms) May 13 22:02:44.125: INFO: (16) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 15.66668ms) May 13 22:02:44.126: INFO: (16) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 16.189752ms) May 13 22:02:44.126: INFO: (16) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 16.080726ms) May 13 22:02:44.126: INFO: (16) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 16.352772ms) May 13 22:02:44.128: INFO: (17) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.05058ms) May 13 22:02:44.129: INFO: (17) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.331149ms) May 13 22:02:44.129: INFO: (17) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:460/proxy/: tls baz (200; 2.687739ms) May 13 22:02:44.129: INFO: (17) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 2.552691ms) May 13 22:02:44.129: INFO: (17) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: ... 
(200; 2.866663ms) May 13 22:02:44.129: INFO: (17) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 2.981486ms) May 13 22:02:44.130: INFO: (17) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 3.117622ms) May 13 22:02:44.130: INFO: (17) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... (200; 3.24821ms) May 13 22:02:44.130: INFO: (17) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 3.207094ms) May 13 22:02:44.130: INFO: (17) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 3.513036ms) May 13 22:02:44.130: INFO: (17) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 3.592988ms) May 13 22:02:44.130: INFO: (17) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 3.895293ms) May 13 22:02:44.131: INFO: (17) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 4.355217ms) May 13 22:02:44.131: INFO: (17) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 4.36029ms) May 13 22:02:44.133: INFO: (18) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 2.066848ms) May 13 22:02:44.133: INFO: (18) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: ... (200; 3.082343ms) May 13 22:02:44.134: INFO: (18) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 3.329156ms) May 13 22:02:44.134: INFO: (18) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... (200; 3.174218ms) May 13 22:02:44.134: INFO: (18) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 3.340562ms) May 13 22:02:44.135: INFO: (18) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 3.904434ms) May 13 22:02:44.135: INFO: (18) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 3.890066ms) May 13 22:02:44.135: INFO: (18) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 3.951354ms) May 13 22:02:44.135: INFO: (18) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 4.49224ms) May 13 22:02:44.138: INFO: (19) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.006525ms) May 13 22:02:44.138: INFO: (19) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:462/proxy/: tls qux (200; 2.252527ms) May 13 22:02:44.138: INFO: (19) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:1080/proxy/: test<... (200; 2.356737ms) May 13 22:02:44.138: INFO: (19) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:460/proxy/: tls baz (200; 2.302245ms) May 13 22:02:44.138: INFO: (19) /api/v1/namespaces/proxy-6985/pods/https:proxy-service-mdm56-jkj7t:443/proxy/: ... 
(200; 2.678864ms) May 13 22:02:44.138: INFO: (19) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:160/proxy/: foo (200; 2.612538ms) May 13 22:02:44.139: INFO: (19) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.893035ms) May 13 22:02:44.139: INFO: (19) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname2/proxy/: tls qux (200; 2.96092ms) May 13 22:02:44.139: INFO: (19) /api/v1/namespaces/proxy-6985/pods/http:proxy-service-mdm56-jkj7t:162/proxy/: bar (200; 2.913337ms) May 13 22:02:44.139: INFO: (19) /api/v1/namespaces/proxy-6985/pods/proxy-service-mdm56-jkj7t/proxy/: test (200; 3.247572ms) May 13 22:02:44.139: INFO: (19) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname1/proxy/: foo (200; 3.584961ms) May 13 22:02:44.139: INFO: (19) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname2/proxy/: bar (200; 3.708368ms) May 13 22:02:44.140: INFO: (19) /api/v1/namespaces/proxy-6985/services/proxy-service-mdm56:portname1/proxy/: foo (200; 4.111581ms) May 13 22:02:44.140: INFO: (19) /api/v1/namespaces/proxy-6985/services/https:proxy-service-mdm56:tlsportname1/proxy/: tls baz (200; 4.079046ms) May 13 22:02:44.140: INFO: (19) /api/v1/namespaces/proxy-6985/services/http:proxy-service-mdm56:portname2/proxy/: bar (200; 4.207412ms) STEP: deleting ReplicationController proxy-service-mdm56 in namespace proxy-6985, will wait for the garbage collector to delete the pods May 13 22:02:44.197: INFO: Deleting ReplicationController proxy-service-mdm56 took: 4.463877ms May 13 22:02:44.298: INFO: Terminating ReplicationController proxy-service-mdm56 pods took: 100.976675ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:02.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6985" for this suite. 
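------------------------------
The repeated GETs above are the proxy conformance test fanning out over every combination the apiserver proxy subresource accepts: service vs. pod target, no prefix vs. http: vs. https: scheme prefix, and named vs. numeric port. Each log entry records the echoed body and the round-trip latency; the (200; N ms) pairs are the assertion that every path variant answers. The same URL shapes can be driven by hand through a local kubectl proxy; the namespace, service, pod, and port names below are placeholders, not values from this run:

# Expose the apiserver on localhost without needing auth headers.
kubectl proxy --port=8001 &
sleep 1   # give the proxy a moment to start serving

# Proxy through a service, addressing one of its named ports.
curl http://localhost:8001/api/v1/namespaces/demo/services/proxy-demo:portname1/proxy/

# Proxy straight to a pod by numeric port, and to a TLS port via the https: prefix.
curl http://localhost:8001/api/v1/namespaces/demo/pods/proxy-demo-pod:160/proxy/
curl http://localhost:8001/api/v1/namespaces/demo/pods/https:proxy-demo-pod:443/proxy/
------------------------------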
• [SLOW TEST:22.465 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":4,"skipped":59,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:56.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-cdd05923-4011-460d-bacc-d0d53602cd4c STEP: Creating a pod to test consume secrets May 13 22:02:56.864: INFO: Waiting up to 5m0s for pod "pod-secrets-a2f980f3-d2af-49b7-ad4a-f5241438848c" in namespace "secrets-188" to be "Succeeded or Failed" May 13 22:02:56.866: INFO: Pod "pod-secrets-a2f980f3-d2af-49b7-ad4a-f5241438848c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.617931ms May 13 22:02:58.870: INFO: Pod "pod-secrets-a2f980f3-d2af-49b7-ad4a-f5241438848c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006225961s May 13 22:03:00.875: INFO: Pod "pod-secrets-a2f980f3-d2af-49b7-ad4a-f5241438848c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01120049s May 13 22:03:02.878: INFO: Pod "pod-secrets-a2f980f3-d2af-49b7-ad4a-f5241438848c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0137907s STEP: Saw pod success May 13 22:03:02.878: INFO: Pod "pod-secrets-a2f980f3-d2af-49b7-ad4a-f5241438848c" satisfied condition "Succeeded or Failed" May 13 22:03:02.880: INFO: Trying to get logs from node node1 pod pod-secrets-a2f980f3-d2af-49b7-ad4a-f5241438848c container secret-volume-test: STEP: delete the pod May 13 22:03:02.896: INFO: Waiting for pod pod-secrets-a2f980f3-d2af-49b7-ad4a-f5241438848c to disappear May 13 22:03:02.898: INFO: Pod pod-secrets-a2f980f3-d2af-49b7-ad4a-f5241438848c no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:02.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-188" for this suite. STEP: Destroying namespace "secret-namespace-2773" for this suite. 
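------------------------------
The secrets test just above creates the secret twice, once in the pod's own namespace and once under the same name in a second namespace (hence the extra "secret-namespace-2773" teardown), and asserts the pod's volume resolves the copy from its own namespace. A rough hand-run equivalent, with placeholder names and payloads:

# Same secret name, different payloads, in two namespaces.
kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl -n demo-a create secret generic shared-name --from-literal=data=from-a
kubectl -n demo-b create secret generic shared-name --from-literal=data=from-b

# The pod in demo-a must see demo-a's payload only.
kubectl -n demo-a apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name
EOF

kubectl -n demo-a logs secret-volume-demo   # prints "from-a" once the pod has completed
------------------------------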
• [SLOW TEST:6.113 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":197,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:03.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:03:03.047: INFO: The status of Pod busybox-readonly-fs990f607c-e4ec-485e-a2e5-2d81db8570da is Pending, waiting for it to be Running (with Ready = true) May 13 22:03:05.051: INFO: The status of Pod busybox-readonly-fs990f607c-e4ec-485e-a2e5-2d81db8570da is Pending, waiting for it to be Running (with Ready = true) May 13 22:03:07.052: INFO: The status of Pod busybox-readonly-fs990f607c-e4ec-485e-a2e5-2d81db8570da is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:07.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2630" for this suite. 
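------------------------------
The kubelet test above runs a BusyBox pod whose container has a read-only root filesystem and verifies nothing can be written to it. A minimal sketch of that shape (pod name and image are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo
spec:
  containers:
  - name: busybox
    image: busybox
    # Any write to the root filesystem should fail; only mounted
    # volumes (none here) would be writable.
    command: ["sh", "-c", "touch /should-fail 2>&1 || echo read-only as expected; sleep 3600"]
    securityContext:
      readOnlyRootFilesystem: true
EOF

kubectl logs busybox-readonly-demo   # expect a touch error plus "read-only as expected"
------------------------------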
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":244,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:41.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller May 13 22:02:41.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 create -f -' May 13 22:02:41.884: INFO: stderr: "" May 13 22:02:41.884: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 13 22:02:41.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 13 22:02:42.051: INFO: stderr: "" May 13 22:02:42.051: INFO: stdout: "update-demo-nautilus-qgszz update-demo-nautilus-w6jhr " May 13 22:02:42.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-qgszz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 13 22:02:42.188: INFO: stderr: "" May 13 22:02:42.188: INFO: stdout: "" May 13 22:02:42.188: INFO: update-demo-nautilus-qgszz is created but not running May 13 22:02:47.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 13 22:02:47.350: INFO: stderr: "" May 13 22:02:47.350: INFO: stdout: "update-demo-nautilus-qgszz update-demo-nautilus-w6jhr " May 13 22:02:47.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-qgszz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' May 13 22:02:47.506: INFO: stderr: "" May 13 22:02:47.506: INFO: stdout: "" May 13 22:02:47.506: INFO: update-demo-nautilus-qgszz is created but not running May 13 22:02:52.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 13 22:02:52.674: INFO: stderr: "" May 13 22:02:52.674: INFO: stdout: "update-demo-nautilus-qgszz update-demo-nautilus-w6jhr " May 13 22:02:52.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-qgszz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 13 22:02:52.823: INFO: stderr: "" May 13 22:02:52.823: INFO: stdout: "true" May 13 22:02:52.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-qgszz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 13 22:02:52.972: INFO: stderr: "" May 13 22:02:52.972: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 13 22:02:52.972: INFO: validating pod update-demo-nautilus-qgszz May 13 22:02:52.975: INFO: got data: { "image": "nautilus.jpg" } May 13 22:02:52.975: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 22:02:52.975: INFO: update-demo-nautilus-qgszz is verified up and running May 13 22:02:52.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-w6jhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 13 22:02:53.150: INFO: stderr: "" May 13 22:02:53.150: INFO: stdout: "true" May 13 22:02:53.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-w6jhr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 13 22:02:53.322: INFO: stderr: "" May 13 22:02:53.322: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 13 22:02:53.322: INFO: validating pod update-demo-nautilus-w6jhr May 13 22:02:53.325: INFO: got data: { "image": "nautilus.jpg" } May 13 22:02:53.325: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 22:02:53.325: INFO: update-demo-nautilus-w6jhr is verified up and running STEP: scaling down the replication controller May 13 22:02:53.334: INFO: scanned /root for discovery docs: May 13 22:02:53.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 scale rc update-demo-nautilus --replicas=1 --timeout=5m' May 13 22:02:53.545: INFO: stderr: "" May 13 22:02:53.545: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 13 22:02:53.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 13 22:02:53.724: INFO: stderr: "" May 13 22:02:53.724: INFO: stdout: "update-demo-nautilus-qgszz update-demo-nautilus-w6jhr " STEP: Replicas for name=update-demo: expected=1 actual=2 May 13 22:02:58.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 13 22:02:58.901: INFO: stderr: "" May 13 22:02:58.901: INFO: stdout: "update-demo-nautilus-qgszz update-demo-nautilus-w6jhr " STEP: Replicas for name=update-demo: expected=1 actual=2 May 13 22:03:03.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 13 22:03:04.074: INFO: stderr: "" May 13 22:03:04.074: INFO: stdout: "update-demo-nautilus-qgszz " May 13 22:03:04.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-qgszz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 13 22:03:04.255: INFO: stderr: "" May 13 22:03:04.255: INFO: stdout: "true" May 13 22:03:04.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-qgszz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 13 22:03:04.428: INFO: stderr: "" May 13 22:03:04.428: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 13 22:03:04.428: INFO: validating pod update-demo-nautilus-qgszz May 13 22:03:04.430: INFO: got data: { "image": "nautilus.jpg" } May 13 22:03:04.430: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 22:03:04.430: INFO: update-demo-nautilus-qgszz is verified up and running STEP: scaling up the replication controller May 13 22:03:04.439: INFO: scanned /root for discovery docs: May 13 22:03:04.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 scale rc update-demo-nautilus --replicas=2 --timeout=5m' May 13 22:03:04.652: INFO: stderr: "" May 13 22:03:04.652: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 13 22:03:04.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 13 22:03:04.823: INFO: stderr: "" May 13 22:03:04.823: INFO: stdout: "update-demo-nautilus-qgszz update-demo-nautilus-tgw67 " May 13 22:03:04.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-qgszz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' May 13 22:03:04.992: INFO: stderr: "" May 13 22:03:04.992: INFO: stdout: "true" May 13 22:03:04.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-qgszz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 13 22:03:05.178: INFO: stderr: "" May 13 22:03:05.178: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 13 22:03:05.178: INFO: validating pod update-demo-nautilus-qgszz May 13 22:03:05.181: INFO: got data: { "image": "nautilus.jpg" } May 13 22:03:05.182: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 22:03:05.182: INFO: update-demo-nautilus-qgszz is verified up and running May 13 22:03:05.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-tgw67 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 13 22:03:05.335: INFO: stderr: "" May 13 22:03:05.336: INFO: stdout: "" May 13 22:03:05.336: INFO: update-demo-nautilus-tgw67 is created but not running May 13 22:03:10.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 13 22:03:10.506: INFO: stderr: "" May 13 22:03:10.506: INFO: stdout: "update-demo-nautilus-qgszz update-demo-nautilus-tgw67 " May 13 22:03:10.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-qgszz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 13 22:03:10.668: INFO: stderr: "" May 13 22:03:10.668: INFO: stdout: "true" May 13 22:03:10.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-qgszz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 13 22:03:10.832: INFO: stderr: "" May 13 22:03:10.832: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 13 22:03:10.832: INFO: validating pod update-demo-nautilus-qgszz May 13 22:03:10.836: INFO: got data: { "image": "nautilus.jpg" } May 13 22:03:10.836: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 22:03:10.836: INFO: update-demo-nautilus-qgszz is verified up and running May 13 22:03:10.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-tgw67 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 13 22:03:11.015: INFO: stderr: "" May 13 22:03:11.015: INFO: stdout: "true" May 13 22:03:11.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods update-demo-nautilus-tgw67 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 13 22:03:11.199: INFO: stderr: "" May 13 22:03:11.199: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 13 22:03:11.199: INFO: validating pod update-demo-nautilus-tgw67 May 13 22:03:11.202: INFO: got data: { "image": "nautilus.jpg" } May 13 22:03:11.202: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 13 22:03:11.202: INFO: update-demo-nautilus-tgw67 is verified up and running STEP: using delete to clean up resources May 13 22:03:11.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 delete --grace-period=0 --force -f -' May 13 22:03:11.356: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 13 22:03:11.356: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 13 22:03:11.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get rc,svc -l name=update-demo --no-headers' May 13 22:03:11.575: INFO: stderr: "No resources found in kubectl-6188 namespace.\n" May 13 22:03:11.575: INFO: stdout: "" May 13 22:03:11.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 13 22:03:11.759: INFO: stderr: "" May 13 22:03:11.759: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:11.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6188" for this suite. 
• [SLOW TEST:30.276 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":17,"skipped":199,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:07.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:03:07.670: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:03:09.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076187, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076187, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076187, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076187, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:03:12.690: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:03:12.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:20.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-210" for this suite. STEP: Destroying namespace "webhook-210-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.189 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":16,"skipped":272,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:02.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:03:06.463: INFO: Deleting pod "var-expansion-13f8223c-d36a-4779-b926-d07fac8f643b" in namespace "var-expansion-2760" May 13 22:03:06.468: INFO: Wait up to 5m0s for pod "var-expansion-13f8223c-d36a-4779-b926-d07fac8f643b" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:22.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2760" for this suite. 
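------------------------------
The variable-expansion test just above submits a pod whose volume subpath expansion contains backticks, expects the pod to fail rather than run, then deletes it and waits for full removal (the 5m wait in the log). One plausible shape for such a pod, as a sketch only; the exact spec the test submits is not shown in the log, and the env value here is an assumption:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-backtick-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "sleep 30"]
    env:
    - name: POD_NAME
      value: "`hostname`"        # backticks: assumed trigger for the failure
    volumeMounts:
    - name: workdir
      mountPath: /logs
      # subPathExpr expands $(POD_NAME); an expansion result containing
      # backticks is rejected, so the container should never start.
      subPathExpr: $(POD_NAME)
  volumes:
  - name: workdir
    emptyDir: {}
EOF
------------------------------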
• [SLOW TEST:20.061 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":5,"skipped":62,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:22.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 13 22:03:22.558: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2052 071cd32f-ac58-4868-94c4-7e6c5a6a0d7f 40723 0 2022-05-13 22:03:22 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-05-13 22:03:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 13 22:03:22.559: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2052 071cd32f-ac58-4868-94c4-7e6c5a6a0d7f 40724 0 2022-05-13 22:03:22 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-05-13 22:03:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:22.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2052" for this suite. 
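------------------------------
The watch test pins down resourceVersion semantics: a watch opened at the version returned by the first update replays only the later MODIFIED and the DELETED events, never the creation. The raw API form of that watch can be seen through a local kubectl proxy (namespace and configmap name are placeholders):

kubectl proxy --port=8001 &
sleep 1   # give the proxy a moment to start serving

# Capture the current resourceVersion of a configmap.
RV=$(kubectl -n demo get configmap watch-demo -o jsonpath='{.metadata.resourceVersion}')

# Stream only events newer than that version (-N disables curl buffering).
curl -N "http://localhost:8001/api/v1/namespaces/demo/configmaps?watch=1&resourceVersion=${RV}"
------------------------------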
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":6,"skipped":80,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:20.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command May 13 22:03:20.364: INFO: Waiting up to 5m0s for pod "var-expansion-5d0f6a4d-7d59-4a92-abed-7905a5652c7a" in namespace "var-expansion-6961" to be "Succeeded or Failed" May 13 22:03:20.367: INFO: Pod "var-expansion-5d0f6a4d-7d59-4a92-abed-7905a5652c7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.611018ms May 13 22:03:22.370: INFO: Pod "var-expansion-5d0f6a4d-7d59-4a92-abed-7905a5652c7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006285361s May 13 22:03:24.374: INFO: Pod "var-expansion-5d0f6a4d-7d59-4a92-abed-7905a5652c7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009746997s STEP: Saw pod success May 13 22:03:24.374: INFO: Pod "var-expansion-5d0f6a4d-7d59-4a92-abed-7905a5652c7a" satisfied condition "Succeeded or Failed" May 13 22:03:24.376: INFO: Trying to get logs from node node1 pod var-expansion-5d0f6a4d-7d59-4a92-abed-7905a5652c7a container dapi-container: STEP: delete the pod May 13 22:03:24.390: INFO: Waiting for pod var-expansion-5d0f6a4d-7d59-4a92-abed-7905a5652c7a to disappear May 13 22:03:24.393: INFO: Pod var-expansion-5d0f6a4d-7d59-4a92-abed-7905a5652c7a no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:24.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6961" for this suite. 
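------------------------------
The passing test above exercises kubelet-side expansion: $(VAR) references in a container's command and args are substituted from the container's environment before the process starts, with no shell involved. A minimal sketch with placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # The kubelet expands $(MESSAGE); /bin/echo never sees the literal text.
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
EOF

kubectl logs var-expansion-demo   # "hello from the environment" once it has run
------------------------------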
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":273,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:24.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:03:24.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a77266c7-6002-4b84-9b60-68b06c783127" in namespace "projected-2484" to be "Succeeded or Failed" May 13 22:03:24.462: INFO: Pod "downwardapi-volume-a77266c7-6002-4b84-9b60-68b06c783127": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26907ms May 13 22:03:26.465: INFO: Pod "downwardapi-volume-a77266c7-6002-4b84-9b60-68b06c783127": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005424377s May 13 22:03:28.469: INFO: Pod "downwardapi-volume-a77266c7-6002-4b84-9b60-68b06c783127": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009111364s STEP: Saw pod success May 13 22:03:28.469: INFO: Pod "downwardapi-volume-a77266c7-6002-4b84-9b60-68b06c783127" satisfied condition "Succeeded or Failed" May 13 22:03:28.472: INFO: Trying to get logs from node node2 pod downwardapi-volume-a77266c7-6002-4b84-9b60-68b06c783127 container client-container: STEP: delete the pod May 13 22:03:28.486: INFO: Waiting for pod downwardapi-volume-a77266c7-6002-4b84-9b60-68b06c783127 to disappear May 13 22:03:28.488: INFO: Pod downwardapi-volume-a77266c7-6002-4b84-9b60-68b06c783127 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:28.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2484" for this suite. 
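------------------------------
Here the pod learns its own memory request from a projected downwardAPI volume: the test mounts the volume, reads the file from the client-container, and compares it against the declared request. A sketch of that wiring (names and sizes are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi
EOF

kubectl logs downwardapi-volume-demo   # "32": the request in Mi, per the divisor
------------------------------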
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":281,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:28.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 13 22:03:28.571: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7518 007924a4-c010-4c24-bcf4-dc53bf8781c7 40873 0 2022-05-13 22:03:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-13 22:03:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 13 22:03:28.572: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7518 007924a4-c010-4c24-bcf4-dc53bf8781c7 40874 0 2022-05-13 22:03:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-13 22:03:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 13 22:03:28.581: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7518 007924a4-c010-4c24-bcf4-dc53bf8781c7 40875 0 2022-05-13 22:03:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-13 22:03:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 13 22:03:28.581: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7518 007924a4-c010-4c24-bcf4-dc53bf8781c7 40876 0 2022-05-13 22:03:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-13 22:03:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:28.581: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7518" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":19,"skipped":306,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:22.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:03:23.113: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:03:25.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076203, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076203, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076203, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076203, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:03:27.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076203, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076203, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076203, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076203, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:03:30.133: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 
[It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:30.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8972" for this suite. STEP: Destroying namespace "webhook-8972-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.592 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":7,"skipped":111,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:01.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 13 22:03:01.762: INFO: >>> kubeConfig: /root/.kube/config May 13 22:03:10.376: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:30.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9970" for this suite. 
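------------------------------
The CRD test registers two CustomResourceDefinitions sharing a group and version but with different kinds, then checks that both schemas land in the aggregated OpenAPI document (the repeated kubeConfig loads above are its per-CRD clients). Publishing can be observed with any structural CRD; everything below uses made-up names:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: integer
EOF

# After a short delay the schema is served with the rest of the API surface:
kubectl explain foo.spec                            # documents the "bars" field
kubectl get --raw /openapi/v2 | grep -c demo.example.com
------------------------------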
• [SLOW TEST:28.986 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":25,"skipped":465,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:30.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-94f63ad9-22d9-4f73-ae28-aad9c63b0e95 STEP: Creating a pod to test consume secrets May 13 22:03:30.782: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3a5f1550-e827-40da-989c-4d6a11e3f7c8" in namespace "projected-8487" to be "Succeeded or Failed" May 13 22:03:30.784: INFO: Pod "pod-projected-secrets-3a5f1550-e827-40da-989c-4d6a11e3f7c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300902ms May 13 22:03:32.788: INFO: Pod "pod-projected-secrets-3a5f1550-e827-40da-989c-4d6a11e3f7c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00623853s May 13 22:03:34.793: INFO: Pod "pod-projected-secrets-3a5f1550-e827-40da-989c-4d6a11e3f7c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011114948s STEP: Saw pod success May 13 22:03:34.793: INFO: Pod "pod-projected-secrets-3a5f1550-e827-40da-989c-4d6a11e3f7c8" satisfied condition "Succeeded or Failed" May 13 22:03:34.795: INFO: Trying to get logs from node node2 pod pod-projected-secrets-3a5f1550-e827-40da-989c-4d6a11e3f7c8 container projected-secret-volume-test: STEP: delete the pod May 13 22:03:34.812: INFO: Waiting for pod pod-projected-secrets-3a5f1550-e827-40da-989c-4d6a11e3f7c8 to disappear May 13 22:03:34.814: INFO: Pod pod-projected-secrets-3a5f1550-e827-40da-989c-4d6a11e3f7c8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:34.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8487" for this suite. 
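------------------------------
Unlike the plain secret volume earlier, this test consumes the secret through a projected volume, which can merge several sources (secrets, configmaps, downward API) into one mount. A minimal single-source sketch with placeholder names:

kubectl create secret generic projected-secret-demo --from-literal=data=secret-payload

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/data"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF
------------------------------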
• ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:30.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-5287a897-f4fa-4149-9b81-af1984510b66 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:36.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4251" for this suite. • [SLOW TEST:6.067 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":472,"failed":0} [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:34.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-698/secret-test-fb4873e1-b721-4246-bc52-bf54e305a59d STEP: Creating a pod to test consume secrets May 13 22:03:34.865: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbfb3426-5e21-4efd-988c-f7388d2a0112" in namespace "secrets-698" to be "Succeeded or Failed" May 13 22:03:34.867: INFO: Pod "pod-configmaps-dbfb3426-5e21-4efd-988c-f7388d2a0112": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288233ms May 13 22:03:36.871: INFO: Pod "pod-configmaps-dbfb3426-5e21-4efd-988c-f7388d2a0112": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005694359s May 13 22:03:38.876: INFO: Pod "pod-configmaps-dbfb3426-5e21-4efd-988c-f7388d2a0112": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011027442s STEP: Saw pod success May 13 22:03:38.876: INFO: Pod "pod-configmaps-dbfb3426-5e21-4efd-988c-f7388d2a0112" satisfied condition "Succeeded or Failed" May 13 22:03:38.879: INFO: Trying to get logs from node node2 pod pod-configmaps-dbfb3426-5e21-4efd-988c-f7388d2a0112 container env-test: STEP: delete the pod May 13 22:03:38.892: INFO: Waiting for pod pod-configmaps-dbfb3426-5e21-4efd-988c-f7388d2a0112 to disappear May 13 22:03:38.894: INFO: Pod pod-configmaps-dbfb3426-5e21-4efd-988c-f7388d2a0112 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:38.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-698" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":472,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:38.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:38.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3027" for this suite. 
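------------------------------
The QOS test above only has to create the pod and read status back: when every container's requests equal its limits for both cpu and memory, the pod is placed in the Guaranteed QOS class. A sketch with placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF

kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed
------------------------------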
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":120,"failed":0} [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:36.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:03:36.328: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes May 13 22:03:36.345: INFO: The status of Pod pod-exec-websocket-26f79574-1ac1-4e45-a9db-039709e64c2b is Pending, waiting for it to be Running (with Ready = true) May 13 22:03:38.349: INFO: The status of Pod pod-exec-websocket-26f79574-1ac1-4e45-a9db-039709e64c2b is Pending, waiting for it to be Running (with Ready = true) May 13 22:03:40.349: INFO: The status of Pod pod-exec-websocket-26f79574-1ac1-4e45-a9db-039709e64c2b is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:40.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9765" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":120,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:40.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:03:40.489: INFO: Creating ReplicaSet my-hostname-basic-ee9a358d-e9b5-4637-ba5e-fe2b45d7e7f2 May 13 22:03:40.495: INFO: Pod name my-hostname-basic-ee9a358d-e9b5-4637-ba5e-fe2b45d7e7f2: Found 0 pods out of 1 May 13 22:03:45.499: INFO: Pod name my-hostname-basic-ee9a358d-e9b5-4637-ba5e-fe2b45d7e7f2: Found 1 pods out of 1 May 13 22:03:45.499: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ee9a358d-e9b5-4637-ba5e-fe2b45d7e7f2" is running May 13 22:03:45.501: INFO: Pod "my-hostname-basic-ee9a358d-e9b5-4637-ba5e-fe2b45d7e7f2-cpnn8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 22:03:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 22:03:43 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 22:03:43 +0000 UTC 
Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 22:03:40 +0000 UTC Reason: Message:}]) May 13 22:03:45.502: INFO: Trying to dial the pod May 13 22:03:50.511: INFO: Controller my-hostname-basic-ee9a358d-e9b5-4637-ba5e-fe2b45d7e7f2: Got expected result from replica 1 [my-hostname-basic-ee9a358d-e9b5-4637-ba5e-fe2b45d7e7f2-cpnn8]: "my-hostname-basic-ee9a358d-e9b5-4637-ba5e-fe2b45d7e7f2-cpnn8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:50.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3623" for this suite. • [SLOW TEST:10.054 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":10,"skipped":123,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:10.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod May 13 22:03:10.942: INFO: Successfully updated pod "var-expansion-7e9efce4-aba5-40a7-a7e0-e870afc3c596" STEP: waiting for pod running STEP: deleting the pod gracefully May 13 22:03:12.948: INFO: Deleting pod "var-expansion-7e9efce4-aba5-40a7-a7e0-e870afc3c596" in namespace "var-expansion-9513" May 13 22:03:12.952: INFO: Wait up to 5m0s for pod "var-expansion-7e9efce4-aba5-40a7-a7e0-e870afc3c596" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:52.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9513" for this suite. 
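------------------------------
Note: the Variable Expansion test above deliberately creates a pod whose volumeMount expansion resolves to an unmountable value (hence the two minutes spent in "creating the pod with failed condition"), then updates the pod so the expansion becomes valid and the container starts. A rough sketch of the mechanism it exercises, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo              # hypothetical name
  annotations:
    mysubpath: logs-a             # value the expansion resolves to; mutable later
spec:
  containers:
  - name: app
    image: busybox:1.35
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: SUBPATH
      valueFrom:
        fieldRef:                 # downward API: env var sourced from an annotation
          fieldPath: metadata.annotations['mysubpath']
    volumeMounts:
    - name: workdir
      mountPath: /logs
      subPathExpr: $(SUBPATH)     # expanded by the kubelet at container start
  volumes:
  - name: workdir
    emptyDir: {}
EOF

If the annotation initially holds a value that cannot be mounted, patching it (kubectl annotate --overwrite) lets the next container start attempt re-resolve the expansion and succeed, which is roughly the lifecycle the test walks through.
------------------------------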
• [SLOW TEST:162.577 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":23,"skipped":518,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:50.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium May 13 22:03:50.570: INFO: Waiting up to 5m0s for pod "pod-be748769-8f4b-4c5d-be73-c34aff133734" in namespace "emptydir-8388" to be "Succeeded or Failed" May 13 22:03:50.576: INFO: Pod "pod-be748769-8f4b-4c5d-be73-c34aff133734": Phase="Pending", Reason="", readiness=false. Elapsed: 5.714197ms May 13 22:03:52.580: INFO: Pod "pod-be748769-8f4b-4c5d-be73-c34aff133734": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009562296s May 13 22:03:54.585: INFO: Pod "pod-be748769-8f4b-4c5d-be73-c34aff133734": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014377283s STEP: Saw pod success May 13 22:03:54.585: INFO: Pod "pod-be748769-8f4b-4c5d-be73-c34aff133734" satisfied condition "Succeeded or Failed" May 13 22:03:54.587: INFO: Trying to get logs from node node1 pod pod-be748769-8f4b-4c5d-be73-c34aff133734 container test-container: STEP: delete the pod May 13 22:03:54.600: INFO: Waiting for pod pod-be748769-8f4b-4c5d-be73-c34aff133734 to disappear May 13 22:03:54.602: INFO: Pod pod-be748769-8f4b-4c5d-be73-c34aff133734 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:54.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8388" for this suite. 
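------------------------------
Note: the EmptyDir "(non-root,0777,default)" test above runs an unprivileged container that creates a file with mode 0777 in an emptyDir volume on the node's default medium (disk, as opposed to medium: Memory) and verifies the mode sticks. A minimal stand-in for what the test image does, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo        # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # the "non-root" part of the test name
  containers:
  - name: test
    image: busybox:1.35
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir: {}                  # "default medium": omit medium entirely
EOF
kubectl logs emptydir-mode-demo   # expect: 777
------------------------------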
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:28.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-7948 STEP: creating a selector STEP: Creating the service pods in kubernetes May 13 22:03:28.626: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 13 22:03:28.658: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 13 22:03:30.661: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 13 22:03:32.663: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 13 22:03:34.663: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:03:36.664: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:03:38.662: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:03:40.663: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:03:42.661: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:03:44.662: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:03:46.663: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:03:48.664: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:03:50.663: INFO: The status of Pod netserver-0 is Running (Ready = true) May 13 22:03:50.667: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 13 22:03:54.703: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 13 22:03:54.703: INFO: Going to poll 10.244.3.217 on port 8080 at least 0 times, with a maximum of 34 tries before failing May 13 22:03:54.705: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.217:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7948 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:03:54.705: INFO: >>> kubeConfig: /root/.kube/config May 13 22:03:54.809: INFO: Found all 1 expected endpoints: [netserver-0] May 13 22:03:54.810: INFO: Going to poll 10.244.4.73 on port 8080 at least 0 times, with a maximum of 34 tries before failing May 13 22:03:54.812: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.73:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7948 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:03:54.812: INFO: 
>>> kubeConfig: /root/.kube/config May 13 22:03:54.907: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:54.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7948" for this suite. • [SLOW TEST:26.311 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":314,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:52.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:03:53.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c317b40-77f3-4001-889b-6995250ac9a9" in namespace "projected-4478" to be "Succeeded or Failed" May 13 22:03:53.025: INFO: Pod "downwardapi-volume-7c317b40-77f3-4001-889b-6995250ac9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28036ms May 13 22:03:55.028: INFO: Pod "downwardapi-volume-7c317b40-77f3-4001-889b-6995250ac9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005478203s May 13 22:03:57.033: INFO: Pod "downwardapi-volume-7c317b40-77f3-4001-889b-6995250ac9a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010524809s STEP: Saw pod success May 13 22:03:57.034: INFO: Pod "downwardapi-volume-7c317b40-77f3-4001-889b-6995250ac9a9" satisfied condition "Succeeded or Failed" May 13 22:03:57.036: INFO: Trying to get logs from node node2 pod downwardapi-volume-7c317b40-77f3-4001-889b-6995250ac9a9 container client-container: STEP: delete the pod May 13 22:03:57.051: INFO: Waiting for pod downwardapi-volume-7c317b40-77f3-4001-889b-6995250ac9a9 to disappear May 13 22:03:57.053: INFO: Pod downwardapi-volume-7c317b40-77f3-4001-889b-6995250ac9a9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:57.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4478" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":526,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:57.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:57.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6407" for this suite. 
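------------------------------
Note: the "lifecycle of an Endpoint" test above drives a bare Endpoints object through create, list, update, patch, and delete-by-collection against the API. The same sequence, sketched with kubectl and hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Endpoints
metadata:
  name: endpoint-demo             # hypothetical name
  labels:
    test: endpoint-demo
subsets:
- addresses:
  - ip: 10.244.1.10               # placeholder backend address
  ports:
  - port: 80
EOF
kubectl get endpoints -l test=endpoint-demo              # "listing all Endpoints"
kubectl patch endpoints endpoint-demo --type=merge \
  -p '{"metadata":{"annotations":{"patched":"true"}}}'   # "patching the Endpoint"
kubectl delete endpoints -l test=endpoint-demo           # "deleting ... by Collection"
------------------------------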
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":25,"skipped":527,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:54.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium May 13 22:03:54.975: INFO: Waiting up to 5m0s for pod "pod-e4aa04c9-7d66-43b6-af34-f9a82a4348b6" in namespace "emptydir-1283" to be "Succeeded or Failed" May 13 22:03:54.979: INFO: Pod "pod-e4aa04c9-7d66-43b6-af34-f9a82a4348b6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.303809ms May 13 22:03:56.982: INFO: Pod "pod-e4aa04c9-7d66-43b6-af34-f9a82a4348b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006875436s May 13 22:03:58.987: INFO: Pod "pod-e4aa04c9-7d66-43b6-af34-f9a82a4348b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011943323s STEP: Saw pod success May 13 22:03:58.987: INFO: Pod "pod-e4aa04c9-7d66-43b6-af34-f9a82a4348b6" satisfied condition "Succeeded or Failed" May 13 22:03:58.990: INFO: Trying to get logs from node node1 pod pod-e4aa04c9-7d66-43b6-af34-f9a82a4348b6 container test-container: STEP: delete the pod May 13 22:03:59.003: INFO: Waiting for pod pod-e4aa04c9-7d66-43b6-af34-f9a82a4348b6 to disappear May 13 22:03:59.004: INFO: Pod pod-e4aa04c9-7d66-43b6-af34-f9a82a4348b6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:59.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1283" for this suite. 
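------------------------------
Note: the "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines that recur throughout this log come from the framework polling the pod phase every couple of seconds. A rough shell equivalent of that loop; POD and NS are placeholders taken from the record above:

POD=pod-e4aa04c9-7d66-43b6-af34-f9a82a4348b6
NS=emptydir-1283
for i in $(seq 1 150); do          # ~5m budget at a 2s poll period
  phase=$(kubectl -n "$NS" get pod "$POD" -o jsonpath='{.status.phase}')
  case "$phase" in Succeeded|Failed) break ;; esac
  sleep 2
done
echo "final phase: $phase"
------------------------------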
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":327,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:59.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:03:59.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4963" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":22,"skipped":383,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":28,"skipped":481,"failed":0} [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:38.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-378ef6df-cf3f-451b-b981-cb2469978627 in namespace container-probe-4503 May 13 22:03:43.023: INFO: Started pod liveness-378ef6df-cf3f-451b-b981-cb2469978627 in namespace container-probe-4503 STEP: checking the pod's current state and verifying that restartCount is present May 13 22:03:43.025: INFO: Initial restart count of pod liveness-378ef6df-cf3f-451b-b981-cb2469978627 is 0 May 13 22:04:01.071: INFO: Restart count of pod container-probe-4503/liveness-378ef6df-cf3f-451b-b981-cb2469978627 is now 1 (18.045732243s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:01.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4503" for this suite. 
• [SLOW TEST:22.113 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:39.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-5943 STEP: creating service affinity-nodeport in namespace services-5943 STEP: creating replication controller affinity-nodeport in namespace services-5943 I0513 22:01:39.450467 30 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-5943, replica count: 3 I0513 22:01:42.501957 30 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:01:45.503468 30 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:01:48.504662 30 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 22:01:48.513: INFO: Creating new exec pod May 13 22:01:53.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' May 13 22:01:53.805: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport 80\n+ echo hostName\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" May 13 22:01:53.805: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 13 22:01:53.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.30.62 80' May 13 22:01:54.051: INFO: stderr: "+ nc -v -t -w 2 10.233.30.62 80\nConnection to 10.233.30.62 80 port [tcp/http] succeeded!\n+ echo hostName\n" May 13 22:01:54.052: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 13 22:01:54.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:01:54.306: INFO: rc: 1 May 13 22:01:54.306: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 
10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:01:55.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:01:55.564: INFO: rc: 1 May 13 22:01:55.564: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:01:56.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:01:56.545: INFO: rc: 1 May 13 22:01:56.545: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:01:57.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:01:57.558: INFO: rc: 1 May 13 22:01:57.558: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:01:58.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:01:58.522: INFO: rc: 1 May 13 22:01:58.522: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
[From 22:01:59 through 22:02:54 the suite re-ran the identical probe roughly once per second (one attempt, issued at 22:02:20, took ~5s to return); every attempt ended in rc: 1 with "nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused" and was followed by "Retrying...". The duplicated entries are elided; the last attempt in this span reads:]
May 13 22:02:55.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:02:55.565: INFO: rc: 1 May 13 22:02:55.565: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying...
May 13 22:02:56.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:02:56.626: INFO: rc: 1 May 13 22:02:56.626: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:02:57.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:02:57.704: INFO: rc: 1 May 13 22:02:57.704: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:02:58.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:02:59.244: INFO: rc: 1 May 13 22:02:59.244: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:02:59.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:02:59.740: INFO: rc: 1 May 13 22:02:59.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:00.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:00.641: INFO: rc: 1 May 13 22:03:00.641: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:01.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:01.690: INFO: rc: 1 May 13 22:03:01.690: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:02.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:02.560: INFO: rc: 1 May 13 22:03:02.560: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:03.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:03.556: INFO: rc: 1 May 13 22:03:03.556: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:04.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:04.816: INFO: rc: 1 May 13 22:03:04.816: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:05.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:05.551: INFO: rc: 1 May 13 22:03:05.551: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:06.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:06.556: INFO: rc: 1 May 13 22:03:06.556: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:07.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:07.548: INFO: rc: 1 May 13 22:03:07.548: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:08.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:08.543: INFO: rc: 1 May 13 22:03:08.543: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:09.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:09.573: INFO: rc: 1 May 13 22:03:09.573: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:10.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:10.532: INFO: rc: 1 May 13 22:03:10.532: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:11.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:11.533: INFO: rc: 1 May 13 22:03:11.533: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:12.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:12.539: INFO: rc: 1 May 13 22:03:12.539: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:13.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:14.161: INFO: rc: 1 May 13 22:03:14.161: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 + echo hostName nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:14.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:14.560: INFO: rc: 1 May 13 22:03:14.560: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:15.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:15.554: INFO: rc: 1 May 13 22:03:15.554: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:16.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:16.568: INFO: rc: 1 May 13 22:03:16.568: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:17.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:17.557: INFO: rc: 1 May 13 22:03:17.557: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:18.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:18.576: INFO: rc: 1 May 13 22:03:18.576: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:19.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:19.579: INFO: rc: 1 May 13 22:03:19.579: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:20.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:20.676: INFO: rc: 1 May 13 22:03:20.676: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:21.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:21.579: INFO: rc: 1 May 13 22:03:21.579: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:22.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:22.549: INFO: rc: 1 May 13 22:03:22.549: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:23.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:23.543: INFO: rc: 1 May 13 22:03:23.543: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:24.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:24.565: INFO: rc: 1 May 13 22:03:24.565: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:25.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:25.564: INFO: rc: 1 May 13 22:03:25.564: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:26.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:26.835: INFO: rc: 1 May 13 22:03:26.835: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:27.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:28.257: INFO: rc: 1 May 13 22:03:28.257: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:28.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:28.528: INFO: rc: 1 May 13 22:03:28.528: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:29.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:29.568: INFO: rc: 1 May 13 22:03:29.568: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:30.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:30.540: INFO: rc: 1 May 13 22:03:30.540: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:31.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:31.999: INFO: rc: 1 May 13 22:03:31.999: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:32.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:32.560: INFO: rc: 1 May 13 22:03:32.560: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:33.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:33.625: INFO: rc: 1 May 13 22:03:33.625: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:34.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:34.555: INFO: rc: 1 May 13 22:03:34.555: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:35.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:35.562: INFO: rc: 1 May 13 22:03:35.562: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:36.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:36.587: INFO: rc: 1 May 13 22:03:36.587: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:37.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:37.561: INFO: rc: 1 May 13 22:03:37.561: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:38.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:38.554: INFO: rc: 1 May 13 22:03:38.554: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:39.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:39.707: INFO: rc: 1 May 13 22:03:39.707: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:40.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:40.598: INFO: rc: 1 May 13 22:03:40.598: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:41.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:41.566: INFO: rc: 1 May 13 22:03:41.566: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:42.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:42.728: INFO: rc: 1 May 13 22:03:42.728: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:43.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:44.132: INFO: rc: 1 May 13 22:03:44.132: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:44.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:44.554: INFO: rc: 1 May 13 22:03:44.554: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:45.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:45.554: INFO: rc: 1 May 13 22:03:45.554: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:46.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:46.555: INFO: rc: 1 May 13 22:03:46.555: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:47.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:47.702: INFO: rc: 1 May 13 22:03:47.702: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:03:48.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:48.562: INFO: rc: 1 May 13 22:03:48.562: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:49.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:49.544: INFO: rc: 1 May 13 22:03:49.544: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:50.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:50.544: INFO: rc: 1 May 13 22:03:50.544: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:03:51.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032' May 13 22:03:51.751: INFO: rc: 1 May 13 22:03:51.751: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32032 nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
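The loop above is the e2e framework's service-reachability probe: about once a second it re-runs kubectl exec in the client pod, pipes the string hostName to nc against the node IP and NodePort (10.10.190.207:32032), and treats a non-zero exit code as a signal to retry. Below is a minimal Go sketch of the same poll-until-reachable pattern; it is an illustration, not the framework's implementation, and it assumes the caller can dial the NodePort directly instead of going through kubectl exec:

package main

import (
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkReachable reports whether a TCP connect to addr succeeds within timeout.
func checkReachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false // e.g. "connection refused" while no backend is answering on the port
	}
	conn.Close()
	return true
}

func main() {
	addr := "10.10.190.207:32032" // node IP + NodePort taken from the log above

	// Poll once per second and give up after two minutes, the same budget
	// that produced the "not reachable within 2m0s timeout" failure below.
	err := wait.PollImmediate(1*time.Second, 2*time.Minute, func() (bool, error) {
		if checkReachable(addr, 2*time.Second) {
			return true, nil // reachable: stop polling
		}
		fmt.Println("Retrying...")
		return false, nil // not yet reachable: poll again (a non-nil error would abort early)
	})
	if err != nil {
		fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", addr)
	}
}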
May 13 22:03:54.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032'
May 13 22:03:54.814: INFO: rc: 1
May 13 22:03:54.814: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5943 exec execpod-affinityw5lmz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32032:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 32032
nc: connect to 10.10.190.207 port 32032 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 22:03:54.815: FAIL: Unexpected error:
    <*errors.errorString | 0xc0012fe7c0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32032 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32032 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001427e40, 0x77b33d8, 0xc000af6b00, 0xc000ba2780, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2535
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001902180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001902180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001902180, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
May 13 22:03:54.816: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-5943, will wait for the garbage collector to delete the pods
May 13 22:03:54.881: INFO: Deleting ReplicationController affinity-nodeport took: 4.281633ms
May 13 22:03:54.982: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.595219ms
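The stack trace places the failure in execAffinityTestForNonLBServiceWithOptionalTransition (service.go:2576): the spec was exercising session affinity on a NodePort service (hence the affinity-nodeport ReplicationController), but it never reached the affinity assertion because the reachability gate above timed out first. For context, that assertion amounts to sending many requests from one client and requiring every response to name the same backend pod. The sketch below shows such a check under two stated assumptions: that the NodePort answers, and that the agnhost backend reports its pod name over HTTP at /hostname (the real test sends hostName over a raw TCP connection via nc instead):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustrative endpoint: node IP + NodePort from the log, plus agnhost
	// netexec's /hostname HTTP route, which echoes the serving pod's hostname.
	const url = "http://10.10.190.207:32032/hostname"
	client := &http.Client{Timeout: 2 * time.Second}

	var hits []string
	for i := 0; i < 10; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("request failed:", err) // the run above never got past this point
			return
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		hits = append(hits, string(body))
	}

	// With session affinity (e.g. sessionAffinity: ClientIP) every request
	// from this one client should land on the same backend pod.
	for _, h := range hits[1:] {
		if h != hits[0] {
			fmt.Println("affinity violated, saw multiple backends:", hits)
			return
		}
	}
	fmt.Printf("all %d requests answered by %q; affinity held\n", len(hits), hits[0])
}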
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-5943".
STEP: Found 27 events.
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:39 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-dsxxk
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:39 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-lt5bs
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:39 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-rfbzb
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:39 +0000 UTC - event for affinity-nodeport-dsxxk: {default-scheduler } Scheduled: Successfully assigned services-5943/affinity-nodeport-dsxxk to node1
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:39 +0000 UTC - event for affinity-nodeport-lt5bs: {default-scheduler } Scheduled: Successfully assigned services-5943/affinity-nodeport-lt5bs to node2
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:39 +0000 UTC - event for affinity-nodeport-rfbzb: {default-scheduler } Scheduled: Successfully assigned services-5943/affinity-nodeport-rfbzb to node2
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:40 +0000 UTC - event for affinity-nodeport-dsxxk: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:41 +0000 UTC - event for affinity-nodeport-dsxxk: {kubelet node1} Started: Started container affinity-nodeport
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:41 +0000 UTC - event for affinity-nodeport-dsxxk: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 317.296129ms
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:41 +0000 UTC - event for affinity-nodeport-dsxxk: {kubelet node1} Created: Created container affinity-nodeport
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:42 +0000 UTC - event for affinity-nodeport-lt5bs: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:42 +0000 UTC - event for affinity-nodeport-lt5bs: {kubelet node2} Started: Started container affinity-nodeport
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:42 +0000 UTC - event for affinity-nodeport-lt5bs: {kubelet node2} Created: Created container affinity-nodeport
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:42 +0000 UTC - event for affinity-nodeport-lt5bs: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 385.588755ms
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:42 +0000 UTC - event for affinity-nodeport-rfbzb: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 317.84414ms
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:42 +0000 UTC - event for affinity-nodeport-rfbzb: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:43 +0000 UTC - event for affinity-nodeport-rfbzb: {kubelet node2} Started: Started container affinity-nodeport
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:43 +0000 UTC - event for affinity-nodeport-rfbzb: {kubelet node2} Created: Created container affinity-nodeport
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:48 +0000 UTC - event for execpod-affinityw5lmz: {default-scheduler } Scheduled: Successfully assigned services-5943/execpod-affinityw5lmz to node1
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:50 +0000 UTC - event for execpod-affinityw5lmz: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 295.80092ms
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:50 +0000 UTC - event for execpod-affinityw5lmz: {kubelet node1} Started: Started container agnhost-container
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:50 +0000 UTC - event for execpod-affinityw5lmz: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:04:02.512: INFO: At 2022-05-13 22:01:50 +0000 UTC - event for execpod-affinityw5lmz: {kubelet node1} Created: Created container agnhost-container
May 13 22:04:02.513: INFO: At 2022-05-13 22:03:54 +0000 UTC - event for affinity-nodeport-dsxxk: {kubelet node1} Killing: Stopping container affinity-nodeport
May 13 22:04:02.513: INFO: At 2022-05-13 22:03:54 +0000 UTC - event for affinity-nodeport-lt5bs: {kubelet node2} Killing: Stopping container affinity-nodeport
May 13 22:04:02.513: INFO: At 2022-05-13 22:03:54 +0000 UTC - event for affinity-nodeport-rfbzb: {kubelet node2} Killing: Stopping container affinity-nodeport
May 13 22:04:02.513: INFO: At 2022-05-13 22:03:54 +0000 UTC - event for execpod-affinityw5lmz: {kubelet node1} Killing: Stopping container agnhost-container
May 13 22:04:02.515: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
May 13 22:04:02.515: INFO: 
May 13 22:04:02.518: INFO: Logging node info for node master1
May 13 22:04:02.520: INFO: Node Info: &Node{ObjectMeta:{master1 e893469e-45f9-457b-9379-276178f6209f 41317 0 2022-05-13 19:57:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:57:38 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-13 19:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-13 20:05:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-13 20:09:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:52 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:52 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:52 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:03:52 
+0000 UTC,LastTransitionTime:2022-05-13 20:03:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5bc4f1fb629f4c3bb455995355cca59c,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:196d75bb-273f-44bf-9b96-1cfef0d34445,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:04:02.521: INFO: Logging kubelet events for node master1 May 13 22:04:02.523: INFO: Logging pods the kubelet thinks is on node master1 May 13 22:04:02.548: INFO: container-registry-65d7c44b96-gqdgz started at 2022-05-13 20:05:09 +0000 UTC (0+2 container statuses recorded) May 13 22:04:02.548: INFO: Container docker-registry ready: true, restart count 0 May 13 22:04:02.548: INFO: Container nginx ready: true, restart count 0 May 13 22:04:02.548: INFO: kube-apiserver-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.548: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:04:02.548: INFO: kube-controller-manager-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.548: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:04:02.548: INFO: kube-scheduler-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.548: INFO: Container kube-scheduler ready: true, restart count 0 May 13 22:04:02.548: INFO: kube-flannel-jw4mp started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:04:02.548: INFO: Init container install-cni ready: true, restart count 2 May 13 22:04:02.548: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:04:02.548: INFO: kube-multus-ds-amd64-ts4fz started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.548: INFO: Container kube-multus ready: true, restart count 1 May 13 22:04:02.548: INFO: kube-proxy-6q994 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.548: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:04:02.548: INFO: node-feature-discovery-controller-cff799f9f-k2qmv started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.548: INFO: Container nfd-controller ready: true, restart count 0 May 13 22:04:02.548: INFO: node-exporter-2jxfg started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) 
May 13 22:04:02.548: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:04:02.548: INFO: Container node-exporter ready: true, restart count 0 May 13 22:04:02.632: INFO: Latency metrics for node master1 May 13 22:04:02.632: INFO: Logging node info for node master2 May 13 22:04:02.635: INFO: Node Info: &Node{ObjectMeta:{master2 6394fb00-7ac6-4b0d-af37-0e7baf892992 41312 0 2022-05-13 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 
195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:52 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:52 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:52 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:03:52 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0c26206724384f32848637ec210bf517,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:87b6bd6a-947f-4fda-a24f-503738da156e,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:04:02.635: INFO: Logging kubelet events for node master2 May 13 22:04:02.637: INFO: Logging pods the kubelet thinks is on node master2 May 13 22:04:02.650: INFO: kube-controller-manager-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.650: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:04:02.650: INFO: kube-scheduler-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.650: INFO: Container kube-scheduler ready: true, restart count 2 May 13 22:04:02.650: INFO: node-exporter-zmlpx started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:04:02.650: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:04:02.650: INFO: Container node-exporter ready: true, restart count 0 May 13 22:04:02.650: INFO: kube-apiserver-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.650: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:04:02.650: INFO: kube-proxy-jxbwz started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.650: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:04:02.650: INFO: kube-flannel-gndff started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:04:02.650: INFO: Init container install-cni ready: true, restart count 2 May 13 22:04:02.650: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:04:02.650: INFO: kube-multus-ds-amd64-w98wb started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.650: INFO: Container kube-multus ready: true, restart count 1 May 13 22:04:02.650: INFO: coredns-8474476ff8-m6b8s started at 2022-05-13 20:01:00 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.650: INFO: Container coredns ready: true, restart count 1 May 13 22:04:02.734: INFO: Latency metrics for node master2 May 13 22:04:02.734: INFO: Logging node info for node master3 May 13 22:04:02.737: INFO: Node Info: &Node{ObjectMeta:{master3 
11a40d0b-d9d1-449f-a587-cc897edbfd9b 41575 0 2022-05-13 19:58:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 
22:04:01 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:04:01 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:04:01 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:04:01 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fba609db464f479c06da20414d1979,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:55d995b3-c2cc-4b60-96f4-5a990abd0c48,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:04:02.737: INFO: Logging kubelet events for node master3 May 13 22:04:02.739: INFO: Logging pods the kubelet thinks is on node master3 May 13 22:04:02.751: INFO: kube-flannel-p5mwf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:04:02.751: INFO: Init container install-cni ready: true, restart count 0 May 13 22:04:02.751: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:04:02.751: INFO: dns-autoscaler-7df78bfcfb-wfmpz started at 2022-05-13 20:01:02 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.751: INFO: Container autoscaler ready: true, restart count 1 May 13 22:04:02.751: INFO: node-exporter-qh76s started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:04:02.751: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:04:02.751: INFO: Container node-exporter ready: true, restart count 0 May 13 22:04:02.751: INFO: kube-controller-manager-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.751: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:04:02.751: INFO: kube-scheduler-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.751: INFO: Container kube-scheduler ready: true, restart count 2 May 13 22:04:02.751: INFO: kube-proxy-6fl99 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.751: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:04:02.751: INFO: kube-apiserver-master3 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.751: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:04:02.751: INFO: kube-multus-ds-amd64-ffgk5 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.751: INFO: Container kube-multus ready: true, restart count 1 May 13 22:04:02.751: INFO: coredns-8474476ff8-x29nh started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.751: INFO: Container coredns ready: true, restart count 1 May 13 22:04:02.839: INFO: Latency metrics for node master3 May 13 22:04:02.839: INFO: Logging node info for node node1 May 13 22:04:02.857: INFO: Node Info: &Node{ObjectMeta:{node1 dca01e5e-a739-4ccc-b102-bfd163c4b832 41318 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true 
feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 
2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:12:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:20 +0000 UTC,LastTransitionTime:2022-05-13 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:52 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:52 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:52 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:03:52 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f73ea6ef9607468c91208265a5b02a1b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ff172cf5-ca8f-45aa-ade2-6dea8be1d249,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003949300,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:04:02.858: 
INFO: Logging kubelet events for node node1 May 13 22:04:02.860: INFO: Logging pods the kubelet thinks is on node node1 May 13 22:04:02.915: INFO: node-feature-discovery-worker-l459c started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.915: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:04:02.915: INFO: node-exporter-42x8d started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:04:02.915: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:04:02.915: INFO: Container node-exporter ready: true, restart count 0 May 13 22:04:02.915: INFO: ss2-2 started at 2022-05-13 22:03:46 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.915: INFO: Container webserver ready: true, restart count 0 May 13 22:04:02.915: INFO: kube-multus-ds-amd64-dtt2x started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.915: INFO: Container kube-multus ready: true, restart count 1 May 13 22:04:02.916: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.916: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:04:02.916: INFO: cmk-init-discover-node1-m2p59 started at 2022-05-13 20:12:33 +0000 UTC (0+3 container statuses recorded) May 13 22:04:02.916: INFO: Container discover ready: false, restart count 0 May 13 22:04:02.916: INFO: Container init ready: false, restart count 0 May 13 22:04:02.916: INFO: Container install ready: false, restart count 0 May 13 22:04:02.916: INFO: prometheus-k8s-0 started at 2022-05-13 20:14:32 +0000 UTC (0+4 container statuses recorded) May 13 22:04:02.916: INFO: Container config-reloader ready: true, restart count 0 May 13 22:04:02.916: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 22:04:02.916: INFO: Container grafana ready: true, restart count 0 May 13 22:04:02.916: INFO: Container prometheus ready: true, restart count 1 May 13 22:04:02.916: INFO: forbid-27541321-6qs6l started at 2022-05-13 22:01:00 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.916: INFO: Container c ready: true, restart count 0 May 13 22:04:02.916: INFO: kube-proxy-rs2zg started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.916: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:04:02.916: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.916: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 22:04:02.916: INFO: ss2-1 started at 2022-05-13 22:04:02 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.916: INFO: Container webserver ready: false, restart count 0 May 13 22:04:02.916: INFO: nginx-proxy-node1 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.916: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:04:02.916: INFO: kubernetes-dashboard-785dcbb76d-tcgth started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.916: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 22:04:02.916: INFO: cmk-tfblh started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded) May 13 22:04:02.916: INFO: Container nodereport ready: true, restart count 0 May 13 22:04:02.916: INFO: Container reconcile ready: true, restart count 0 May 13 
22:04:02.916: INFO: kube-flannel-xfj7m started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:04:02.916: INFO: Init container install-cni ready: true, restart count 2 May 13 22:04:02.916: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:04:02.916: INFO: test-rs-kkljz started at 2022-05-13 22:03:57 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.916: INFO: Container httpd ready: true, restart count 0 May 13 22:04:02.916: INFO: cmk-webhook-6c9d5f8578-59hj6 started at 2022-05-13 20:13:16 +0000 UTC (0+1 container statuses recorded) May 13 22:04:02.916: INFO: Container cmk-webhook ready: true, restart count 0 May 13 22:04:02.916: INFO: collectd-p26j2 started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded) May 13 22:04:02.916: INFO: Container collectd ready: true, restart count 0 May 13 22:04:02.916: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:04:02.916: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:04:03.315: INFO: Latency metrics for node node1 May 13 22:04:03.315: INFO: Logging node info for node node2 May 13 22:04:03.318: INFO: Node Info: &Node{ObjectMeta:{node2 461ea6c2-df11-4be4-802e-29bddc0f2535 41443 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 
kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:13:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:57 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:57 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:03:57 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:03:57 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b36a7c38429c4cc598bd0e6ca8278ad0,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:4fcc32fc-d037-4cf9-a62f-f372f6cc17cb,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:04:03.318: INFO: Logging kubelet events for node node2 May 13 22:04:03.325: INFO: Logging pods the kubelet thinks is on node node2 May 13 22:04:03.340: INFO: test-pod started at 2022-05-13 22:01:22 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container webserver ready: true, restart count 0 May 13 22:04:03.340: INFO: affinity-nodeport-timeout-bcd9t started at 2022-05-13 22:02:35 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 13 22:04:03.340: INFO: ss2-0 started at 2022-05-13 22:03:44 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container webserver ready: true, restart count 0 May 13 22:04:03.340: INFO: node-exporter-n5snd started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:04:03.340: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:04:03.340: INFO: Container node-exporter ready: true, restart count 0 May 13 22:04:03.340: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 started at 2022-05-13 20:17:23 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container tas-extender ready: true, restart count 0 May 13 22:04:03.340: INFO: test-rs-p42q7 started at 2022-05-13 22:04:02 +0000 UTC (0+1 container statuses recorded) May 
13 22:04:03.340: INFO: Container httpd ready: false, restart count 0 May 13 22:04:03.340: INFO: liveness-5585cd15-90a0-48e9-86e8-87f63b350bcb started at 2022-05-13 22:03:54 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container agnhost-container ready: true, restart count 0 May 13 22:04:03.340: INFO: affinity-nodeport-timeout-pl8nq started at 2022-05-13 22:02:35 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 13 22:04:03.340: INFO: kube-multus-ds-amd64-l7nx2 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container kube-multus ready: true, restart count 1 May 13 22:04:03.340: INFO: execpod-affinity66wv7 started at 2022-05-13 22:02:41 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container agnhost-container ready: true, restart count 0 May 13 22:04:03.340: INFO: node-feature-discovery-worker-cxxqf started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:04:03.340: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:04:03.340: INFO: collectd-9gqhr started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded) May 13 22:04:03.340: INFO: Container collectd ready: true, restart count 0 May 13 22:04:03.340: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:04:03.340: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:04:03.340: INFO: pod-cc56578a-dc28-4ff1-865e-f413406b7708 started at 2022-05-13 22:04:01 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container test-container ready: false, restart count 0 May 13 22:04:03.340: INFO: test-rs-j5l27 started at 2022-05-13 22:04:02 +0000 UTC (0+2 container statuses recorded) May 13 22:04:03.340: INFO: Container httpd ready: false, restart count 0 May 13 22:04:03.340: INFO: Container test-rs ready: false, restart count 0 May 13 22:04:03.340: INFO: kube-flannel-lv9xf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:04:03.340: INFO: Init container install-cni ready: true, restart count 2 May 13 22:04:03.340: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:04:03.340: INFO: prometheus-operator-585ccfb458-vrwnp started at 2022-05-13 20:14:11 +0000 UTC (0+2 container statuses recorded) May 13 22:04:03.340: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:04:03.340: INFO: Container prometheus-operator ready: true, restart count 0 May 13 22:04:03.340: INFO: ss-0 started at 2022-05-13 22:03:59 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container webserver ready: false, restart count 0 May 13 22:04:03.340: INFO: cmk-qhbd6 started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded) May 13 22:04:03.340: INFO: Container nodereport ready: true, restart count 0 May 13 22:04:03.340: INFO: Container reconcile ready: true, restart count 0 May 13 22:04:03.340: INFO: kube-proxy-wkzbm started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:04:03.340: INFO: cmk-init-discover-node2-hm7r7 started at 2022-05-13 20:12:52 
+0000 UTC (0+3 container statuses recorded) May 13 22:04:03.340: INFO: Container discover ready: false, restart count 0 May 13 22:04:03.340: INFO: Container init ready: false, restart count 0 May 13 22:04:03.340: INFO: Container install ready: false, restart count 0 May 13 22:04:03.340: INFO: affinity-nodeport-timeout-sttmb started at 2022-05-13 22:02:35 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 13 22:04:03.340: INFO: pod-exec-websocket-26f79574-1ac1-4e45-a9db-039709e64c2b started at 2022-05-13 22:03:36 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container main ready: true, restart count 0 May 13 22:04:03.340: INFO: nginx-proxy-node2 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded) May 13 22:04:03.340: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:04:04.575: INFO: Latency metrics for node node2 May 13 22:04:04.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5943" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [145.168 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:03:54.815: Unexpected error: <*errors.errorString | 0xc0012fe7c0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32032 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32032 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":191,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":481,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:01.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs May 13 22:04:01.125: INFO: Waiting up to 5m0s for pod "pod-cc56578a-dc28-4ff1-865e-f413406b7708" in namespace "emptydir-4043" to be "Succeeded or Failed" May 13 22:04:01.127: INFO: Pod "pod-cc56578a-dc28-4ff1-865e-f413406b7708": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.120897ms May 13 22:04:03.131: INFO: Pod "pod-cc56578a-dc28-4ff1-865e-f413406b7708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005611074s May 13 22:04:05.136: INFO: Pod "pod-cc56578a-dc28-4ff1-865e-f413406b7708": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011279525s STEP: Saw pod success May 13 22:04:05.136: INFO: Pod "pod-cc56578a-dc28-4ff1-865e-f413406b7708" satisfied condition "Succeeded or Failed" May 13 22:04:05.139: INFO: Trying to get logs from node node2 pod pod-cc56578a-dc28-4ff1-865e-f413406b7708 container test-container: STEP: delete the pod May 13 22:04:05.150: INFO: Waiting for pod pod-cc56578a-dc28-4ff1-865e-f413406b7708 to disappear May 13 22:04:05.152: INFO: Pod pod-cc56578a-dc28-4ff1-865e-f413406b7708 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:05.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4043" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":481,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:57.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:03:57.198: INFO: Pod name sample-pod: Found 0 pods out of 1 May 13 22:04:02.206: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset May 13 22:04:02.216: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet May 13 22:04:02.221: INFO: observed ReplicaSet test-rs in namespace replicaset-8461 with ReadyReplicas 1, AvailableReplicas 1 May 13 22:04:02.233: INFO: observed ReplicaSet test-rs in namespace replicaset-8461 with ReadyReplicas 1, AvailableReplicas 1 May 13 22:04:02.242: INFO: observed ReplicaSet test-rs in namespace replicaset-8461 with ReadyReplicas 1, AvailableReplicas 1 May 13 22:04:02.245: INFO: observed ReplicaSet test-rs in namespace replicaset-8461 with ReadyReplicas 1, AvailableReplicas 1 May 13 22:04:06.113: INFO: observed ReplicaSet test-rs in namespace replicaset-8461 with ReadyReplicas 2, AvailableReplicas 2 May 13 22:04:07.311: INFO: observed Replicaset test-rs in namespace replicaset-8461 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:07.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8461" for this suite. 
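------------------------------
For reference, the "Scaling up" and "patching the ReplicaSet" steps in the spec above amount to a replicas change applied through the apps/v1 API. The following is a minimal client-go sketch of that flow, not the suite's own source; it assumes a reachable cluster and reuses the names from the log ("test-rs" in namespace "replicaset-8461") purely for illustration:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	rsClient := cs.AppsV1().ReplicaSets("replicaset-8461")

	// Raise spec.replicas with a strategic-merge patch; the ReplicaSet
	// controller then creates pods, and a watch such as the one logged
	// above sees ReadyReplicas/AvailableReplicas converge on the target.
	patch := []byte(`{"spec":{"replicas":3}}`)
	rs, err := rsClient.Patch(context.TODO(), "test-rs",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("ReadyReplicas=%d AvailableReplicas=%d\n",
		rs.Status.ReadyReplicas, rs.Status.AvailableReplicas)
}
------------------------------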
• [SLOW TEST:10.150 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":26,"skipped":538,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:11.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:03:11.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 13 22:03:19.390: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-13T22:03:19Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-13T22:03:19Z]] name:name1 resourceVersion:40666 uid:5a61bb5b-7108-45e3-bf7c-a1486e9dae51] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 13 22:03:29.395: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-13T22:03:29Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-13T22:03:29Z]] name:name2 resourceVersion:40896 uid:275e722f-22cc-4810-8b26-90fff3b36375] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 13 22:03:39.400: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-13T22:03:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-13T22:03:39Z]] name:name1 resourceVersion:41119 uid:5a61bb5b-7108-45e3-bf7c-a1486e9dae51] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 13 22:03:49.406: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-13T22:03:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-13T22:03:49Z]] name:name2 resourceVersion:41265 uid:275e722f-22cc-4810-8b26-90fff3b36375] num:map[num1:9223372036854775807 num2:1000000]]} STEP: 
Deleting first CR May 13 22:03:59.411: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-13T22:03:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-13T22:03:39Z]] name:name1 resourceVersion:41498 uid:5a61bb5b-7108-45e3-bf7c-a1486e9dae51] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 13 22:04:09.418: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-13T22:03:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-13T22:03:49Z]] name:name2 resourceVersion:41827 uid:275e722f-22cc-4810-8b26-90fff3b36375] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:19.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-7731" for this suite. • [SLOW TEST:68.133 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":18,"skipped":214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:20.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:20.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6426" for this suite. 
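------------------------------
The Secrets test above exercises the `immutable` field: once a Secret is created with it set, the API server rejects any later change to the payload (or an attempt to unset the field). A minimal sketch of that contract, assuming an already-configured clientset; the secret name and error handling are illustrative, not the test's actual code:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// demoImmutableSecret creates an immutable Secret, then shows that a
// follow-up update of its data is refused by the API server.
func demoImmutableSecret(cs kubernetes.Interface, ns string) error {
	immutable := true
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret"}, // illustrative name
		Immutable:  &immutable,
		StringData: map[string]string{"key": "value"},
	}
	created, err := cs.CoreV1().Secrets(ns).Create(context.TODO(), sec, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Mutating the data of an immutable Secret must fail validation.
	created.StringData = map[string]string{"key": "changed"}
	if _, err := cs.CoreV1().Secrets(ns).Update(context.TODO(), created, metav1.UpdateOptions{}); err == nil {
		return fmt.Errorf("update of immutable secret unexpectedly succeeded")
	}
	return nil
}
------------------------------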
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":19,"skipped":275,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:20.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 13 22:04:20.133: INFO: Waiting up to 5m0s for pod "security-context-a8c59113-d60c-4c7e-9817-fa2aa10787e4" in namespace "security-context-5189" to be "Succeeded or Failed" May 13 22:04:20.135: INFO: Pod "security-context-a8c59113-d60c-4c7e-9817-fa2aa10787e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079101ms May 13 22:04:22.139: INFO: Pod "security-context-a8c59113-d60c-4c7e-9817-fa2aa10787e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005412779s May 13 22:04:24.141: INFO: Pod "security-context-a8c59113-d60c-4c7e-9817-fa2aa10787e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007839633s STEP: Saw pod success May 13 22:04:24.141: INFO: Pod "security-context-a8c59113-d60c-4c7e-9817-fa2aa10787e4" satisfied condition "Succeeded or Failed" May 13 22:04:24.143: INFO: Trying to get logs from node node2 pod security-context-a8c59113-d60c-4c7e-9817-fa2aa10787e4 container test-container: STEP: delete the pod May 13 22:04:24.157: INFO: Waiting for pod security-context-a8c59113-d60c-4c7e-9817-fa2aa10787e4 to disappear May 13 22:04:24.159: INFO: Pod security-context-a8c59113-d60c-4c7e-9817-fa2aa10787e4 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:24.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5189" for this suite. 
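------------------------------
The pod created by the security-context test above carries its user and group at the pod level, so every container process runs with those IDs and the container can verify them with `id`. A minimal sketch of such a pod spec, with illustrative UID/GID values rather than whatever the test picked:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// securityContextPod builds a pod whose container runs as the given
// UID/GID via pod.Spec.SecurityContext, mirroring the behavior checked above.
func securityContextPod(uid, gid int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:  &uid,
				RunAsGroup: &gid,
			},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				// Print the effective IDs so they can be checked in the logs.
				Command: []string{"sh", "-c", "id -u && id -g"},
			}},
		},
	}
}
------------------------------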
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":277,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:04.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD May 13 22:04:04.621: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:27.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8779" for this suite. • [SLOW TEST:23.093 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":10,"skipped":194,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:07.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:04:07.361: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 13 22:04:12.367: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 13 22:04:12.367: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 13 22:04:14.371: INFO: Creating deployment "test-rollover-deployment" May 13 22:04:14.377: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 13 22:04:16.382: INFO:
Check revision of new replica set for deployment "test-rollover-deployment" May 13 22:04:16.387: INFO: Ensure that both replica sets have 1 created replica May 13 22:04:16.394: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 13 22:04:16.401: INFO: Updating deployment test-rollover-deployment May 13 22:04:16.401: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 13 22:04:18.407: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 13 22:04:18.412: INFO: Make sure deployment "test-rollover-deployment" is complete May 13 22:04:18.419: INFO: all replica sets need to contain the pod-template-hash label May 13 22:04:18.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076256, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:04:20.424: INFO: all replica sets need to contain the pod-template-hash label May 13 22:04:20.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076260, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:04:22.428: INFO: all replica sets need to contain the pod-template-hash label May 13 22:04:22.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076260, loc:(*time.Location)(0x9e2e180)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:04:24.428: INFO: all replica sets need to contain the pod-template-hash label May 13 22:04:24.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076260, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:04:26.427: INFO: all replica sets need to contain the pod-template-hash label May 13 22:04:26.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076260, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:04:28.425: INFO: all replica sets need to contain the pod-template-hash label May 13 22:04:28.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076260, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076254, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:04:30.426: INFO: May 13 22:04:30.426: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 13 22:04:30.436: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2311 b7919514-94a2-44e1-a840-b39118ae418b 42296 2 2022-05-13 22:04:14 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-05-13 22:04:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-13 22:04:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000bb4538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-05-13 22:04:14 +0000 UTC,LastTransitionTime:2022-05-13 22:04:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2022-05-13 22:04:30 +0000 UTC,LastTransitionTime:2022-05-13 22:04:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 13 22:04:30.440: INFO: 
New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-2311 062eaff7-9030-4c53-8a8b-6903330d0985 42287 2 2022-05-13 22:04:16 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment b7919514-94a2-44e1-a840-b39118ae418b 0xc000ec8620 0xc000ec8621}] [] [{kube-controller-manager Update apps/v1 2022-05-13 22:04:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7919514-94a2-44e1-a840-b39118ae418b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ec86a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 13 22:04:30.440: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 13 22:04:30.440: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2311 cf18896a-26a1-4bfb-8831-1ea4eb1d0797 42295 2 2022-05-13 22:04:07 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment b7919514-94a2-44e1-a840-b39118ae418b 0xc000ec8377 0xc000ec8378}] [] [{e2e.test Update apps/v1 2022-05-13 22:04:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-13 22:04:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7919514-94a2-44e1-a840-b39118ae418b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000ec8438 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 22:04:30.440: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-2311 9e5c30b1-d194-44e5-999d-e3b32a115cf6 42020 2 2022-05-13 22:04:14 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment b7919514-94a2-44e1-a840-b39118ae418b 0xc000ec84c7 0xc000ec84c8}] [] [{kube-controller-manager Update apps/v1 2022-05-13 22:04:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7919514-94a2-44e1-a840-b39118ae418b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ec8588 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 22:04:30.444: INFO: Pod "test-rollover-deployment-98c5f4599-4zkbf" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-4zkbf test-rollover-deployment-98c5f4599- deployment-2311 e87e654e-d9db-4dfe-ba08-d0e9446d7553 42065 0 2022-05-13 22:04:16 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.230" ], "mac": "aa:71:4c:a0:05:15", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.230" ], "mac": "aa:71:4c:a0:05:15", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 062eaff7-9030-4c53-8a8b-6903330d0985 0xc000ec8baf 0xc000ec8bc0}] [] [{kube-controller-manager Update v1 2022-05-13 22:04:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"062eaff7-9030-4c53-8a8b-6903330d0985\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:04:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:04:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.230\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nnz2v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnz2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.230,StartTime:2022-05-13 22:04:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:04:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://889b34b57ea9529c59255be0610d138dfc99ce802ebf40e7ee1de882db6e4781,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.230,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:30.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2311" for this suite. 
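The [sig-apps] Deployment test that just finished (its summary follows) exercises "rollover": updating a Deployment's pod template so the controller creates a new ReplicaSet and scales the old one down. A minimal client-go sketch of that trigger, assuming a kubeconfig at /root/.kube/config and an illustrative Deployment "webserver" in namespace "default" (these names are not taken from the log):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Strategic-merge patch that bumps the image of the container named
	// "httpd"; changing the pod template is what starts the rollover.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"httpd","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1"}]}}}}`)
	if _, err := cs.AppsV1().Deployments("default").Patch(
		context.TODO(), "webserver", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```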
• [SLOW TEST:23.117 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":27,"skipped":543,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:05.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4130 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4130 STEP: creating replication controller externalsvc in namespace services-4130 I0513 22:04:05.213585 29 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4130, replica count: 2 I0513 22:04:08.264150 29 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:04:11.266018 29 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 13 22:04:11.282: INFO: Creating new exec pod May 13 22:04:15.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4130 exec execpodtqw6r -- /bin/sh -x -c nslookup nodeport-service.services-4130.svc.cluster.local' May 13 22:04:15.563: INFO: stderr: "+ nslookup nodeport-service.services-4130.svc.cluster.local\n" May 13 22:04:15.563: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-4130.svc.cluster.local\tcanonical name = externalsvc.services-4130.svc.cluster.local.\nName:\texternalsvc.services-4130.svc.cluster.local\nAddress: 10.233.51.240\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4130, will wait for the garbage collector to delete the pods May 13 22:04:15.621: INFO: Deleting ReplicationController externalsvc took: 4.6367ms May 13 22:04:15.722: INFO: Terminating ReplicationController externalsvc pods took: 100.95399ms May 13 22:04:32.434: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:32.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4130" for this suite. 
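The Services test above flips an existing NodePort Service to type=ExternalName, after which the cluster DNS answers the Service name with a CNAME (visible in the nslookup output logged above). A hedged sketch of that API-level change, with illustrative Service and target names; clearing clusterIP and nodePorts is needed because those fields are invalid on an ExternalName Service:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svcs := cs.CoreV1().Services("default")
	svc, err := svcs.Get(context.TODO(), "nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ExternalName services are pure DNS CNAMEs: drop the fields that only
	// make sense for cluster-IP-backed types before switching.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.default.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	for i := range svc.Spec.Ports {
		svc.Spec.Ports[i].NodePort = 0
	}
	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```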
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:27.279 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":31,"skipped":483,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:27.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics May 13 22:04:33.820: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 13 22:04:33.977: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:33.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2762" for this suite. 
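The garbage-collector test above ("keep the rc around until all its pods are deleted if the deleteOptions says so") corresponds to foreground cascading deletion: the owner object stays, carrying a deletion timestamp, until the garbage collector has removed its dependents. A minimal sketch with an illustrative ReplicationController name:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Foreground propagation: the RC is not fully removed until its pods are.
	fg := metav1.DeletePropagationForeground
	if err := cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "example-rc", metav1.DeleteOptions{PropagationPolicy: &fg}); err != nil {
		panic(err)
	}
}
```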
• [SLOW TEST:6.233 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":11,"skipped":222,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:24.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:04:24.731: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:04:26.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076264, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076264, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076264, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076264, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:04:29.752: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:04:29.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1136-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:37.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-649" for this suite. STEP: Destroying namespace "webhook-649-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.730 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":21,"skipped":291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:37.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events May 13 22:04:38.009: INFO: created test-event-1 May 13 22:04:38.013: INFO: created test-event-2 May 13 22:04:38.016: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events May 13 22:04:38.018: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity May 13 22:04:38.029: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:38.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6399" for this suite. 
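The Events test above creates three labelled events and removes them with a single DeleteCollection call scoped by a label selector. A minimal sketch; the label selector value is illustrative, not taken from the suite's source:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// DeleteCollection removes every Event in the namespace that matches the
	// list options, in one request.
	if err := cs.CoreV1().Events("default").DeleteCollection(
		context.TODO(), metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "testevent-set=true"}); err != nil {
		panic(err)
	}
}
```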
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":22,"skipped":319,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:38.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version May 13 22:04:38.096: INFO: Major version: 1 STEP: Confirm minor version May 13 22:04:38.096: INFO: cleanMinorVersion: 21 May 13 22:04:38.096: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:38.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-204" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":23,"skipped":333,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:32.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 13 22:04:32.518: INFO: Waiting up to 5m0s for pod "security-context-3c776dbe-8174-4c47-ac09-4633163cf2c8" in namespace "security-context-6952" to be "Succeeded or Failed" May 13 22:04:32.521: INFO: Pod "security-context-3c776dbe-8174-4c47-ac09-4633163cf2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.021176ms May 13 22:04:34.524: INFO: Pod "security-context-3c776dbe-8174-4c47-ac09-4633163cf2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006019267s May 13 22:04:36.528: INFO: Pod "security-context-3c776dbe-8174-4c47-ac09-4633163cf2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010463601s May 13 22:04:38.532: INFO: Pod "security-context-3c776dbe-8174-4c47-ac09-4633163cf2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013728591s May 13 22:04:40.536: INFO: Pod "security-context-3c776dbe-8174-4c47-ac09-4633163cf2c8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.018155124s STEP: Saw pod success May 13 22:04:40.536: INFO: Pod "security-context-3c776dbe-8174-4c47-ac09-4633163cf2c8" satisfied condition "Succeeded or Failed" May 13 22:04:40.538: INFO: Trying to get logs from node node2 pod security-context-3c776dbe-8174-4c47-ac09-4633163cf2c8 container test-container: STEP: delete the pod May 13 22:04:40.552: INFO: Waiting for pod security-context-3c776dbe-8174-4c47-ac09-4633163cf2c8 to disappear May 13 22:04:40.554: INFO: Pod security-context-3c776dbe-8174-4c47-ac09-4633163cf2c8 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:40.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6952" for this suite. • [SLOW TEST:8.077 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":32,"skipped":498,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:38.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 13 22:04:44.193: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:44.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2697" for this suite. 
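The two [sig-node] tests above exercise the same pod shape from different angles: a container running under an explicit non-root UID/GID, and a termination message written to a non-default path that the kubelet reads back when the container exits. A hedged sketch of such a pod, with illustrative names, image tag, and IDs (whether the write succeeds as a non-root user depends on how the kubelet provisions the termination-log file):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	uid, gid := int64(1000), int64(3000)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// Non-default path; the kubelet surfaces its contents as the
				// container's termination message.
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:  &uid,
					RunAsGroup: &gid,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```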
• [SLOW TEST:6.077 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:40.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 13 22:04:46.622: INFO: &Pod{ObjectMeta:{send-events-4160b205-2906-4e0c-aaec-6893e660274d events-1760 d7987065-55d9-4038-9ded-9857262fee1a 42844 0 2022-05-13 22:04:40 +0000 UTC map[name:foo time:599938577] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.242" ], "mac": "3e:5d:94:3f:55:1a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.242" ], "mac": "3e:5d:94:3f:55:1a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-05-13 22:04:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:04:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:04:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.242\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bkxr9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bkxr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace
:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.242,StartTime:2022-05-13 22:04:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:04:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://eabb90bec9ecd00286aeab60a5509dc9eee5644d9eb2f9394a062cc0637843fa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 13 22:04:48.628: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 13 22:04:50.632: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:50.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1760" for this suite. 
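The "checking for scheduler event" and "checking for kubelet event" steps above can be reproduced by listing Events whose involvedObject is the pod and inspecting the reporting component. A minimal sketch with an illustrative pod name:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	events, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Pod,involvedObject.name=send-events-demo",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Source.Component is "default-scheduler" for the Scheduled event and
		// "kubelet" for image-pull/start events.
		fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
	}
}
```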
• [SLOW TEST:10.072 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":33,"skipped":502,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:30.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9643.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9643.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9643.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9643.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 22:04:40.526: INFO: DNS probes using dns-test-1be99e55-626c-481a-8836-9307875d7428 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9643.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9643.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9643.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9643.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 22:04:46.567: INFO: DNS probes using dns-test-1b13f281-644e-4eb3-8e06-3f2f0ac205b9 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9643.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9643.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9643.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9643.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 22:04:52.610: INFO: DNS probes using dns-test-2fe0d3af-fbe6-4c03-8f65-448fdda52a46 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:52.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9643" for this suite. • [SLOW TEST:22.163 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":28,"skipped":548,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:52.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-6700 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet May 13 22:02:52.903: INFO: Found 0 stateful pods, waiting for 3 May 13 22:03:02.906: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 13 22:03:02.906: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 13 22:03:02.906: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 13 22:03:12.907: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 13 22:03:12.907: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 13 22:03:12.907: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 May 13 22:03:12.930: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 13 22:03:22.957: INFO: Updating stateful set ss2 May 13 22:03:22.962: INFO: Waiting for Pod statefulset-6700/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted May 13 22:03:32.984: INFO: Found 1 stateful pods, waiting for 3 May 13 22:03:42.988: INFO: Found 2 stateful pods, waiting for 3 May 13 22:03:52.988: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 13 22:03:52.988: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 13 22:03:52.988: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, 
currently Running - Ready=true STEP: Performing a phased rolling update May 13 22:03:53.010: INFO: Updating stateful set ss2 May 13 22:03:53.014: INFO: Waiting for Pod statefulset-6700/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 13 22:04:03.040: INFO: Updating stateful set ss2 May 13 22:04:03.045: INFO: Waiting for StatefulSet statefulset-6700/ss2 to complete update May 13 22:04:03.045: INFO: Waiting for Pod statefulset-6700/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 13 22:04:13.051: INFO: Waiting for StatefulSet statefulset-6700/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 13 22:04:23.053: INFO: Deleting all statefulset in ns statefulset-6700 May 13 22:04:23.055: INFO: Scaling statefulset ss2 to 0 May 13 22:04:53.077: INFO: Waiting for statefulset status.replicas updated to 0 May 13 22:04:53.080: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:53.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6700" for this suite. • [SLOW TEST:120.223 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":26,"skipped":424,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":347,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:44.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 13 22:04:44.634: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 13 22:04:46.643: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076284, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076284, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076284, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076284, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:04:48.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076284, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076284, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076284, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076284, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:04:51.654: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:04:51.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:59.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9512" for this suite. 
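The conversion-webhook test above ("convert from CR v1 to CR v2") hinges on the CRD's spec.conversion stanza pointing the apiserver at an in-cluster webhook service. A hedged sketch of that configuration object; the service name, namespace, path, and port are illustrative, not read from the log:

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	path := "/crdconvert"
	port := int32(9443)
	conv := &apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
					Port:      &port,
				},
				CABundle: nil, // would carry the serving cert's CA in a real setup
			},
			// The apiserver sends ConversionReview requests in one of these versions.
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
	fmt.Printf("%+v\n", conv)
}
```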
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:15.563 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":25,"skipped":347,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:59.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:04:59.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9340" for this suite. 
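The "should provide secure master service" check below amounts to verifying that the built-in "kubernetes" Service in the default namespace exposes HTTPS on port 443. A minimal sketch of that lookup:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		if p.Name == "https" && p.Port == 443 {
			fmt.Println("master service is served over HTTPS at", svc.Spec.ClusterIP)
		}
	}
}
```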
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":26,"skipped":355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:53.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container May 13 22:05:01.692: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8960 pod-service-account-0f2c0345-bd7b-4fb6-a562-bed447527b40 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 13 22:05:01.951: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8960 pod-service-account-0f2c0345-bd7b-4fb6-a562-bed447527b40 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 13 22:05:02.206: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8960 pod-service-account-0f2c0345-bd7b-4fb6-a562-bed447527b40 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:02.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8960" for this suite. 
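The ServiceAccounts test above reads the auto-mounted token from inside the pod via kubectl exec. A minimal in-pod sketch of the same check: any pod with token automounting enabled sees these three files under the standard mount path (the paths are the standard ones; the program must run inside a pod to find them):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(base, f))
		if err != nil {
			fmt.Println(f, "not mounted:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
}
```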
• [SLOW TEST:9.331 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":27,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:02.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info May 13 22:05:02.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2222 cluster-info' May 13 22:05:02.729: INFO: stderr: "" May 13 22:05:02.729: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:02.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2222" for this suite. 
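The cluster-info check that follows, like the earlier server-version test, only consults the control plane's discovery surface: the endpoint kubectl cluster-info prints is the rest.Config host, and the major/minor pair comes from the /version endpoint. A minimal sketch combining both:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane is running at", cfg.Host)
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("server version: major=%s minor=%s (%s)\n", v.Major, v.Minor, v.GitVersion)
}
```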
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":28,"skipped":466,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:02.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics May 13 22:05:03.827: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 13 22:05:04.013: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:04.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3783" for this suite. 
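The garbage-collector test above ("delete RS created by deployment when not orphaning") is the counterpart of the earlier foreground-deletion sketch: background propagation deletes the owner immediately and lets the garbage collector remove the Deployment's ReplicaSets and pods afterwards, which is why the test briefly observes leftover replica sets and pods before they disappear. A minimal sketch with an illustrative Deployment name:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background propagation: the Deployment is gone at once; its dependents
	// are collected asynchronously.
	bg := metav1.DeletePropagationBackground
	if err := cs.AppsV1().Deployments("default").Delete(
		context.TODO(), "example-deployment", metav1.DeleteOptions{PropagationPolicy: &bg}); err != nil {
		panic(err)
	}
}
```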
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":29,"skipped":473,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:52.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:04:52.661: INFO: Creating deployment "webserver-deployment" May 13 22:04:52.665: INFO: Waiting for observed generation 1 May 13 22:04:54.671: INFO: Waiting for all required pods to come up May 13 22:04:54.675: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 13 22:05:02.682: INFO: Waiting for deployment "webserver-deployment" to complete May 13 22:05:02.687: INFO: Updating deployment "webserver-deployment" with a non-existent image May 13 22:05:02.693: INFO: Updating deployment webserver-deployment May 13 22:05:02.693: INFO: Waiting for observed generation 2 May 13 22:05:04.699: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 13 22:05:04.703: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 13 22:05:04.705: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 13 22:05:04.712: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 13 22:05:04.712: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 13 22:05:04.714: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 13 22:05:04.718: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 13 22:05:04.718: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 13 22:05:04.725: INFO: Updating deployment webserver-deployment May 13 22:05:04.725: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 13 22:05:04.730: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 13 22:05:04.731: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 13 22:05:04.736: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1994 b169c1b5-2b36-4592-87d1-9fa58b4c42f0 43544 3 2022-05-13 22:04:52 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-05-13 22:04:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-13 22:05:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00525e8a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-05-13 22:05:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2022-05-13 22:05:02 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 13 22:05:04.739: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-1994 bb77c86d-7506-467a-94cf-a946345db5a8 43547 3 2022-05-13 22:05:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b169c1b5-2b36-4592-87d1-9fa58b4c42f0 0xc000ca6977 
0xc000ca6978}] [] [{kube-controller-manager Update apps/v1 2022-05-13 22:05:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b169c1b5-2b36-4592-87d1-9fa58b4c42f0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ca6a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 22:05:04.739: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 13 22:05:04.739: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-1994 755358fa-6e4a-48fe-9f1f-3659eb3f0758 43545 3 2022-05-13 22:04:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment b169c1b5-2b36-4592-87d1-9fa58b4c42f0 0xc000ca6ae7 0xc000ca6ae8}] [] [{kube-controller-manager Update apps/v1 2022-05-13 22:04:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b169c1b5-2b36-4592-87d1-9fa58b4c42f0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ca6bf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 13 22:05:04.743: INFO: Pod "webserver-deployment-795d758f88-9bxth" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9bxth webserver-deployment-795d758f88- deployment-1994 37799d06-2491-4539-943b-822ddc3d7a25 43538 0 2022-05-13 22:05:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bb77c86d-7506-467a-94cf-a946345db5a8 0xc00525ec3f 0xc00525ec50}] [] [{kube-controller-manager Update v1 2022-05-13 22:05:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb77c86d-7506-467a-94cf-a946345db5a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-13 22:05:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v6559,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v6559,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-05-13 22:05:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.743: INFO: Pod "webserver-deployment-795d758f88-9sngn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9sngn webserver-deployment-795d758f88- deployment-1994 2b1ea9d2-05a7-4cb4-87d3-630ae8c01065 43540 0 2022-05-13 22:05:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.105" ], "mac": "a6:30:44:92:7a:e9", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.105" ], "mac": "a6:30:44:92:7a:e9", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bb77c86d-7506-467a-94cf-a946345db5a8 0xc00525ee3f 0xc00525ee50}] [] [{kube-controller-manager Update v1 2022-05-13 22:05:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb77c86d-7506-467a-94cf-a946345db5a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:05:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j279p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j279p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{P
hase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.744: INFO: Pod "webserver-deployment-795d758f88-bs76z" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bs76z webserver-deployment-795d758f88- deployment-1994 9b044058-1d57-4391-9edd-3167f820cbe9 43478 0 2022-05-13 22:05:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bb77c86d-7506-467a-94cf-a946345db5a8 0xc00525efcf 0xc00525efe0}] [] [{kube-controller-manager Update v1 2022-05-13 22:05:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb77c86d-7506-467a-94cf-a946345db5a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-13 22:05:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k75mj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resource
s:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k75mj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-05-13 22:05:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.744: INFO: Pod "webserver-deployment-795d758f88-n2l85" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-n2l85 
webserver-deployment-795d758f88- deployment-1994 4f98f7cf-977e-4a6e-8000-e71e92148e14 43472 0 2022-05-13 22:05:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bb77c86d-7506-467a-94cf-a946345db5a8 0xc00525f1af 0xc00525f1c0}] [] [{kube-controller-manager Update v1 2022-05-13 22:05:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb77c86d-7506-467a-94cf-a946345db5a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-13 22:05:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rtndm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rtndm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:ni
l,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-05-13 22:05:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.744: INFO: Pod "webserver-deployment-795d758f88-n6jgh" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-n6jgh webserver-deployment-795d758f88- deployment-1994 6b69dddc-5f2d-4dbb-919e-636012a7df62 43551 0 2022-05-13 22:05:04 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bb77c86d-7506-467a-94cf-a946345db5a8 0xc00525f38f 0xc00525f3a0}] [] [{kube-controller-manager Update v1 2022-05-13 22:05:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb77c86d-7506-467a-94cf-a946345db5a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v4hds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4hds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,To
lerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.745: INFO: Pod "webserver-deployment-795d758f88-sj2r6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-sj2r6 webserver-deployment-795d758f88- deployment-1994 2ffe8f73-8db7-4031-bc6a-88d56317b546 43481 0 2022-05-13 22:05:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bb77c86d-7506-467a-94cf-a946345db5a8 0xc00525f4ff 0xc00525f510}] [] [{kube-controller-manager Update v1 2022-05-13 22:05:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb77c86d-7506-467a-94cf-a946345db5a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-13 22:05:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8xnmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8xnmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-05-13 22:05:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.745: INFO: Pod "webserver-deployment-847dcfb7fb-8drsz" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8drsz webserver-deployment-847dcfb7fb- deployment-1994 49f63ec5-7ce7-4638-9c2d-2eb9867cc7c6 43383 0 2022-05-13 22:04:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.249" ], "mac": "52:42:d0:f4:40:9b", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.249" ], "mac": "52:42:d0:f4:40:9b", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 755358fa-6e4a-48fe-9f1f-3659eb3f0758 0xc00525f70f 0xc00525f720}] [] [{kube-controller-manager Update v1 2022-05-13 22:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"755358fa-6e4a-48fe-9f1f-3659eb3f0758\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:04:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:05:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.249\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-khw92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-khw92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.249,StartTime:2022-05-13 22:04:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:05:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://7c9f551a12b9f8246d1e3559b708b8ddc5d374d2b4c4a9c8da729786e38d439b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.249,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.745: INFO: Pod "webserver-deployment-847dcfb7fb-bfncl" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-bfncl webserver-deployment-847dcfb7fb- deployment-1994 5d338405-ea8f-47ea-8fc8-ebc94aee8404 43334 0 2022-05-13 22:04:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.103" ], "mac": "3e:2d:8d:72:19:06", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.103" ], "mac": "3e:2d:8d:72:19:06", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 755358fa-6e4a-48fe-9f1f-3659eb3f0758 0xc00525f90f 0xc00525f920}] [] [{kube-controller-manager Update v1 2022-05-13 22:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"755358fa-6e4a-48fe-9f1f-3659eb3f0758\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:04:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:04:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wml9m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wml9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.103,StartTime:2022-05-13 22:04:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:04:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://8c655c735194e20020f00afceea73d109a0f3f35149090da2e678328d96264c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.746: INFO: Pod "webserver-deployment-847dcfb7fb-hlqk4" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hlqk4 webserver-deployment-847dcfb7fb- deployment-1994 31f390cd-7d50-4677-9d19-4070de2d0693 43380 0 2022-05-13 22:04:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.246" ], "mac": "9a:ec:55:8b:37:0f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.246" ], "mac": "9a:ec:55:8b:37:0f", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 755358fa-6e4a-48fe-9f1f-3659eb3f0758 0xc00525fb0f 0xc00525fb20}] [] [{kube-controller-manager Update v1 2022-05-13 22:04:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"755358fa-6e4a-48fe-9f1f-3659eb3f0758\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:04:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:05:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.246\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-99grd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-99grd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.246,StartTime:2022-05-13 22:04:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:04:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://d8edf211501057ea9a1a413b63081a25666e7f4f05eed1118822de727ee88d6f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.746: INFO: Pod "webserver-deployment-847dcfb7fb-ktwv6" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-ktwv6 webserver-deployment-847dcfb7fb- deployment-1994 0c2cd553-9e96-451c-8188-85161a4c7f2c 43389 0 2022-05-13 22:04:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.247" ], "mac": "0e:5e:28:31:47:fc", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.247" ], "mac": "0e:5e:28:31:47:fc", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
755358fa-6e4a-48fe-9f1f-3659eb3f0758 0xc00525fd0f 0xc00525fd20}] [] [{kube-controller-manager Update v1 2022-05-13 22:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"755358fa-6e4a-48fe-9f1f-3659eb3f0758\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:04:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:05:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.247\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dfcnl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dfcnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpti
ons:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.247,StartTime:2022-05-13 22:04:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:04:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://76d2c14814f6b90b03437daea5f771b148acd00d774e245df78ab7f7dc209ec9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.746: INFO: Pod "webserver-deployment-847dcfb7fb-ncfgc" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-ncfgc webserver-deployment-847dcfb7fb- deployment-1994 a94920ac-de5f-4b7a-820d-0ad64749b5f4 43331 0 2022-05-13 22:04:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.102" ], "mac": "92:ae:43:4d:01:6d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.102" ], 
"mac": "92:ae:43:4d:01:6d", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 755358fa-6e4a-48fe-9f1f-3659eb3f0758 0xc00525ff1f 0xc00525ff30}] [] [{kube-controller-manager Update v1 2022-05-13 22:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"755358fa-6e4a-48fe-9f1f-3659eb3f0758\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:04:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:04:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.102\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2cgtj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2cgtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.102,StartTime:2022-05-13 22:04:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:04:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://b8900a31a678eb7bec7152c7ac0feb3f808647e5a41f7d0ccd285bd2fe92bdff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.747: INFO: Pod "webserver-deployment-847dcfb7fb-njx2f" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-njx2f webserver-deployment-847dcfb7fb- deployment-1994 a493d195-fbd1-4798-9d5d-e163bf5f614f 43341 0 2022-05-13 22:04:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.100" ], "mac": "96:4d:b0:6c:1d:56", "default": 
true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.100" ], "mac": "96:4d:b0:6c:1d:56", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 755358fa-6e4a-48fe-9f1f-3659eb3f0758 0xc0005a039f 0xc0005a03f0}] [] [{kube-controller-manager Update v1 2022-05-13 22:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"755358fa-6e4a-48fe-9f1f-3659eb3f0758\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:04:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:04:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4ltqw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4ltqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Ter
minationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.100,StartTime:2022-05-13 22:04:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:04:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://f2aaa6a5f9052efa482e0c3dec1b3671095d10ba6d2cbcd8ac9f814c05f0be70,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.747: INFO: Pod "webserver-deployment-847dcfb7fb-qbt5b" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-qbt5b webserver-deployment-847dcfb7fb- deployment-1994 0d828cca-b931-4035-997e-bcf5fa236928 43553 0 2022-05-13 22:05:04 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] 
map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 755358fa-6e4a-48fe-9f1f-3659eb3f0758 0xc0005a06af 0xc0005a0700}] [] [{kube-controller-manager Update v1 2022-05-13 22:05:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"755358fa-6e4a-48fe-9f1f-3659eb3f0758\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zd5hk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zd5hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]L
ocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:05:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.747: INFO: Pod "webserver-deployment-847dcfb7fb-xppgj" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-xppgj webserver-deployment-847dcfb7fb- deployment-1994 eed049fd-f1dd-493b-91a7-8f20f6c427b3 43277 0 2022-05-13 22:04:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.101" ], "mac": "8e:a8:d0:25:4c:0a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.101" ], "mac": "8e:a8:d0:25:4c:0a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 755358fa-6e4a-48fe-9f1f-3659eb3f0758 0xc0005a0acf 0xc0005a0af0}] [] [{kube-controller-manager Update v1 2022-05-13 22:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"755358fa-6e4a-48fe-9f1f-3659eb3f0758\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:04:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:04:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.101\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qftsm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qftsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.101,StartTime:2022-05-13 22:04:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:04:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://5d6fe0700e55cc7ea46944b9beb457f72f868517381107f27d6fe3b80a86d9e6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:05:04.748: INFO: Pod "webserver-deployment-847dcfb7fb-zs6qj" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-zs6qj webserver-deployment-847dcfb7fb- deployment-1994 798128fd-990c-4fcc-95e0-21173609e748 43307 0 2022-05-13 22:04:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.244" ], "mac": "42:f1:ca:1a:81:90", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.244" ], "mac": "42:f1:ca:1a:81:90", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 755358fa-6e4a-48fe-9f1f-3659eb3f0758 0xc0005a112f 0xc0005a13b0}] [] [{kube-controller-manager Update v1 2022-05-13 22:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"755358fa-6e4a-48fe-9f1f-3659eb3f0758\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:04:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:04:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.244\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lnvdz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnvdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:04:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.244,StartTime:2022-05-13 22:04:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:04:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://b21fb4efaa09ffa65bd8c65a9aeaadcb5671853ad2211d1e9e4d5c72b82afb19,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:04.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1994" for this suite. 
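Annotation: the pod dumps above are the framework's diagnostics for the proportional-scaling check. Every pod behind the webserver-deployment ReplicaSet is printed with its full spec and status; a pod is reported "available" once its Ready condition is True, while pod-qbt5b, still Pending with only PodScheduled set, is the one reported "not available". (In each projected volume, DefaultMode:*420 is decimal for 0644, the default file mode of service-account token mounts.) Below is a minimal sketch of the same availability check using client-go; it assumes only the kubeconfig path, namespace, and name=httpd label that appear in the log, and is not the e2e framework's own code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the test run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The deployment's pods carry the label name=httpd (see the dumps above).
	pods, err := client.CoreV1().Pods("deployment-1994").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// A pod counts as available once its Ready condition is True.
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("Pod %q phase=%s available=%t\n", p.Name, p.Status.Phase, ready)
	}
}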
• [SLOW TEST:12.117 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":29,"skipped":550,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:02:30.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-6770 May 13 22:02:31.009: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:33.012: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 13 22:02:35.013: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) May 13 22:02:35.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 13 22:02:35.340: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 13 22:02:35.340: INFO: stdout: "iptables" May 13 22:02:35.340: INFO: proxyMode: iptables May 13 22:02:35.347: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 13 22:02:35.349: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-6770 STEP: creating replication controller affinity-nodeport-timeout in namespace services-6770 I0513 22:02:35.361244 35 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6770, replica count: 3 I0513 22:02:38.413063 35 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:02:41.413873 35 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 22:02:41.421: INFO: Creating new exec pod May 13 22:02:50.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' May 13 22:02:50.686: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" May 13 22:02:50.686: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" 
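Annotation: the "400 Bad Request" responses above are expected. `echo hostName | nc` sends a bare line that httpd rejects as malformed HTTP; the test only needs the TCP connection to succeed, which the "succeeded!" lines confirm. The Service under test pairs type NodePort with ClientIP session affinity plus an affinity timeout. The following is a hedged sketch of such a Service built from the client-go types; the name and namespace are taken from the log, but the selector, timeout value, and backend port are illustrative assumptions not shown in this excerpt.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func affinityService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "affinity-nodeport-timeout", // name from the log
			Namespace: "services-6770",             // namespace from the log
		},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			Selector:        map[string]string{"name": "affinity-nodeport-timeout"}, // assumed
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				// Assumed value; the excerpt does not show the timeout the suite uses.
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: int32Ptr(10)},
			},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // assumed backend port
			}},
		},
	}
}

func main() {
	svc := affinityService()
	fmt.Printf("service %s: affinity=%s\n", svc.Name, svc.Spec.SessionAffinity)
}

With SessionAffinity set to ClientIP, kube-proxy (here confirmed to be in iptables mode via the /proxyMode probe on port 10249) keeps routing a given client to the same backend until the configured TimeoutSeconds of inactivity elapses, which is the behavior this test goes on to verify.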
May 13 22:02:50.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.22.92 80' May 13 22:02:50.920: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.22.92 80\nConnection to 10.233.22.92 80 port [tcp/http] succeeded!\n" May 13 22:02:50.920: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 13 22:02:50.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:02:51.156: INFO: rc: 1 May 13 22:02:51.156: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:02:52.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:02:52.395: INFO: rc: 1 May 13 22:02:52.396: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:02:53.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:02:53.517: INFO: rc: 1 May 13 22:02:53.517: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:02:54.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:02:54.456: INFO: rc: 1 May 13 22:02:54.457: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
The same NodePort probe is then retried roughly once per second. Every attempt from May 13 22:02:52.157 through May 13 22:04:27.415 fails identically with "nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused" and rc: 1; the entries differ only in their timestamps and, occasionally, in the interleaving order of the two "+" shell-trace lines. The last attempt in this excerpt:

May 13 22:04:27.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991'
May 13 22:04:27.415: INFO: rc: 1
May 13 22:04:27.415: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30991
nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 22:04:28.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:28.416: INFO: rc: 1 May 13 22:04:28.416: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:29.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:30.190: INFO: rc: 1 May 13 22:04:30.190: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:31.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:31.677: INFO: rc: 1 May 13 22:04:31.677: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:32.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:33.083: INFO: rc: 1 May 13 22:04:33.083: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:04:33.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:33.620: INFO: rc: 1 May 13 22:04:33.620: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:34.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:34.863: INFO: rc: 1 May 13 22:04:34.863: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:35.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:35.585: INFO: rc: 1 May 13 22:04:35.585: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:36.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:36.741: INFO: rc: 1 May 13 22:04:36.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:04:37.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:37.587: INFO: rc: 1 May 13 22:04:37.587: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:38.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:38.536: INFO: rc: 1 May 13 22:04:38.536: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:39.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:39.457: INFO: rc: 1 May 13 22:04:39.457: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:40.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:40.436: INFO: rc: 1 May 13 22:04:40.436: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:04:41.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:41.389: INFO: rc: 1 May 13 22:04:41.389: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:42.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:42.611: INFO: rc: 1 May 13 22:04:42.611: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:43.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:43.417: INFO: rc: 1 May 13 22:04:43.417: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30991 + echo hostName nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:44.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:44.402: INFO: rc: 1 May 13 22:04:44.402: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:04:45.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:45.510: INFO: rc: 1 May 13 22:04:45.510: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:46.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:46.613: INFO: rc: 1 May 13 22:04:46.613: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:47.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:47.425: INFO: rc: 1 May 13 22:04:47.425: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:48.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:48.412: INFO: rc: 1 May 13 22:04:48.412: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:04:49.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:49.394: INFO: rc: 1 May 13 22:04:49.394: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:50.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:50.426: INFO: rc: 1 May 13 22:04:50.427: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:51.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:51.439: INFO: rc: 1 May 13 22:04:51.439: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:04:51.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991' May 13 22:04:52.079: INFO: rc: 1 May 13 22:04:52.079: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6770 exec execpod-affinity66wv7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30991: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30991 nc: connect to 10.10.190.207 port 30991 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
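The loop above is the test's NodePort reachability probe: from the exec pod, pipe a line into nc against the node IP and NodePort, treat any non-zero exit as unreachable, and retry until a 2-minute deadline. The following is a minimal standalone Go sketch of that observable behavior, not the e2e framework's actual helper; the kubeconfig path, namespace, pod name, and endpoint are copied from this log, and the one-second retry cadence is inferred from the timestamps:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// All of these values are taken from the log above; they are
	// illustrative, not general-purpose defaults.
	const (
		kubeconfig = "/root/.kube/config"
		namespace  = "services-6770"
		execPod    = "execpod-affinity66wv7"
		endpoint   = "10.10.190.207"
		port       = "30991"
		timeout    = 2 * time.Minute
	)
	// The same shell pipeline the framework runs: nc exits non-zero on
	// "Connection refused", /bin/sh propagates it, and kubectl surfaces
	// it as "command terminated with exit code 1".
	probe := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %s", endpoint, port)

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl",
			"--kubeconfig="+kubeconfig, "--namespace="+namespace,
			"exec", execPod, "--", "/bin/sh", "-x", "-c", probe)
		out, err := cmd.CombinedOutput()
		if err == nil {
			// On success the agnhost backend replies with its pod
			// hostname, which the affinity test compares across hits.
			fmt.Printf("service reachable, response: %s", out)
			return
		}
		fmt.Printf("rc != 0 (%v), retrying...\n", err)
		time.Sleep(time.Second) // the log shows ~1s between attempts
	}
	fmt.Printf("service is not reachable within %v timeout on endpoint %s:%s over TCP protocol\n",
		timeout, endpoint, port)
}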
May 13 22:04:52.080: FAIL: Unexpected error:
    <*errors.errorString | 0xc000b87300>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30991 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30991 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc00078e6e0, 0x77b33d8, 0xc0032b1ce0, 0xc00388c280)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0010e5e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0010e5e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0010e5e00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
May 13 22:04:52.081: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-6770, will wait for the garbage collector to delete the pods
May 13 22:04:52.154: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 3.980852ms
May 13 22:04:52.255: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 101.042292ms
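With the probe exhausted, the test fails and tears down. The [AfterEach] block below then dumps the namespace's events for triage; the 33 events trace the full pod lifecycle (scheduling, image pulls, container start, and finally the Killing events from cleanup). A hedged client-go sketch of that event-collection step follows, standing in for the framework's own helper; the kubeconfig path and namespace are taken from this log:

package main

import (
	"context"
	"fmt"
	"sort"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path and namespace copied from the log above; adjust elsewhere.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	events, err := client.CoreV1().Events("services-6770").List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Sort chronologically so the lifecycle reads in order, as below.
	sort.Slice(events.Items, func(i, j int) bool {
		return events.Items[i].FirstTimestamp.Time.Before(events.Items[j].FirstTimestamp.Time)
	})
	fmt.Printf("Found %d events.\n", len(events.Items))
	for _, e := range events.Items {
		// Mirrors the log's "At <time> - event for <object>: {<source>} <reason>: <message>" shape.
		fmt.Printf("At %s - event for %s: {%s %s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name,
			e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}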
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6770".
STEP: Found 33 events.
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:31 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 390.533001ms
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:31 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:31 +0000 UTC - event for kube-proxy-mode-detector: {default-scheduler } Scheduled: Successfully assigned services-6770/kube-proxy-mode-detector to node2
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:32 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Started: Started container agnhost-container
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:32 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Created: Created container agnhost-container
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:35 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-bcd9t
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:35 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-sttmb
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:35 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-pl8nq
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:35 +0000 UTC - event for affinity-nodeport-timeout-bcd9t: {default-scheduler } Scheduled: Successfully assigned services-6770/affinity-nodeport-timeout-bcd9t to node2
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:35 +0000 UTC - event for affinity-nodeport-timeout-pl8nq: {default-scheduler } Scheduled: Successfully assigned services-6770/affinity-nodeport-timeout-pl8nq to node2
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:35 +0000 UTC - event for affinity-nodeport-timeout-sttmb: {default-scheduler } Scheduled: Successfully assigned services-6770/affinity-nodeport-timeout-sttmb to node2
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:35 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Killing: Stopping container agnhost-container
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:38 +0000 UTC - event for affinity-nodeport-timeout-bcd9t: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:38 +0000 UTC - event for affinity-nodeport-timeout-pl8nq: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 380.913066ms
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:38 +0000 UTC - event for affinity-nodeport-timeout-pl8nq: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:38 +0000 UTC - event for affinity-nodeport-timeout-pl8nq: {kubelet node2} Started: Started container affinity-nodeport-timeout
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:38 +0000 UTC - event for affinity-nodeport-timeout-pl8nq: {kubelet node2} Created: Created container affinity-nodeport-timeout
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:38 +0000 UTC - event for affinity-nodeport-timeout-sttmb: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:39 +0000 UTC - event for affinity-nodeport-timeout-bcd9t: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 917.550404ms
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:39 +0000 UTC - event for affinity-nodeport-timeout-bcd9t: {kubelet node2} Started: Started container affinity-nodeport-timeout
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:39 +0000 UTC - event for affinity-nodeport-timeout-bcd9t: {kubelet node2} Created: Created container affinity-nodeport-timeout
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:39 +0000 UTC - event for affinity-nodeport-timeout-sttmb: {kubelet node2} Created: Created container affinity-nodeport-timeout
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:39 +0000 UTC - event for affinity-nodeport-timeout-sttmb: {kubelet node2} Started: Started container affinity-nodeport-timeout
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:39 +0000 UTC - event for affinity-nodeport-timeout-sttmb: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 667.482416ms
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:41 +0000 UTC - event for execpod-affinity66wv7: {default-scheduler } Scheduled: Successfully assigned services-6770/execpod-affinity66wv7 to node2
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:43 +0000 UTC - event for execpod-affinity66wv7: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 393.716018ms
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:43 +0000 UTC - event for execpod-affinity66wv7: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:44 +0000 UTC - event for execpod-affinity66wv7: {kubelet node2} Created: Created container agnhost-container
May 13 22:05:02.972: INFO: At 2022-05-13 22:02:45 +0000 UTC - event
for execpod-affinity66wv7: {kubelet node2} Started: Started container agnhost-container May 13 22:05:02.972: INFO: At 2022-05-13 22:04:52 +0000 UTC - event for affinity-nodeport-timeout-bcd9t: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout May 13 22:05:02.972: INFO: At 2022-05-13 22:04:52 +0000 UTC - event for affinity-nodeport-timeout-pl8nq: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout May 13 22:05:02.972: INFO: At 2022-05-13 22:04:52 +0000 UTC - event for affinity-nodeport-timeout-sttmb: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout May 13 22:05:02.972: INFO: At 2022-05-13 22:04:52 +0000 UTC - event for execpod-affinity66wv7: {kubelet node2} Killing: Stopping container agnhost-container May 13 22:05:02.974: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:05:02.974: INFO: May 13 22:05:02.978: INFO: Logging node info for node master1 May 13 22:05:02.981: INFO: Node Info: &Node{ObjectMeta:{master1 e893469e-45f9-457b-9379-276178f6209f 43067 0 2022-05-13 19:57:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:57:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-13 19:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-13 20:05:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-13 20:09:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:04:53 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:04:53 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:04:53 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:04:53 +0000 UTC,LastTransitionTime:2022-05-13 20:03:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5bc4f1fb629f4c3bb455995355cca59c,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:196d75bb-273f-44bf-9b96-1cfef0d34445,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:05:02.981: INFO: Logging kubelet events for node master1 May 13 22:05:02.984: INFO: Logging pods the kubelet thinks is on node master1 May 13 22:05:03.004: INFO: kube-proxy-6q994 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses 
recorded) May 13 22:05:03.004: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:05:03.004: INFO: node-feature-discovery-controller-cff799f9f-k2qmv started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.004: INFO: Container nfd-controller ready: true, restart count 0 May 13 22:05:03.004: INFO: node-exporter-2jxfg started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:05:03.004: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:05:03.004: INFO: Container node-exporter ready: true, restart count 0 May 13 22:05:03.004: INFO: kube-apiserver-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.004: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:05:03.004: INFO: kube-controller-manager-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.004: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:05:03.004: INFO: kube-scheduler-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.004: INFO: Container kube-scheduler ready: true, restart count 0 May 13 22:05:03.004: INFO: kube-flannel-jw4mp started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:05:03.004: INFO: Init container install-cni ready: true, restart count 2 May 13 22:05:03.004: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:05:03.004: INFO: kube-multus-ds-amd64-ts4fz started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.004: INFO: Container kube-multus ready: true, restart count 1 May 13 22:05:03.004: INFO: container-registry-65d7c44b96-gqdgz started at 2022-05-13 20:05:09 +0000 UTC (0+2 container statuses recorded) May 13 22:05:03.004: INFO: Container docker-registry ready: true, restart count 0 May 13 22:05:03.004: INFO: Container nginx ready: true, restart count 0 May 13 22:05:03.089: INFO: Latency metrics for node master1 May 13 22:05:03.089: INFO: Logging node info for node master2 May 13 22:05:03.132: INFO: Node Info: &Node{ObjectMeta:{master2 6394fb00-7ac6-4b0d-af37-0e7baf892992 43515 0 2022-05-13 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:05:03 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:05:03 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:05:03 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:05:03 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0c26206724384f32848637ec210bf517,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:87b6bd6a-947f-4fda-a24f-503738da156e,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:05:03.133: INFO: Logging kubelet events for node master2 May 13 22:05:03.136: INFO: Logging pods the kubelet thinks is on node master2 May 13 22:05:03.145: INFO: kube-apiserver-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.146: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:05:03.146: INFO: kube-proxy-jxbwz started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.146: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:05:03.146: INFO: kube-flannel-gndff started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:05:03.146: INFO: Init container install-cni ready: true, restart count 2 May 13 22:05:03.146: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:05:03.146: INFO: kube-multus-ds-amd64-w98wb started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.146: INFO: Container kube-multus ready: true, restart count 1 May 13 22:05:03.146: INFO: coredns-8474476ff8-m6b8s started at 2022-05-13 20:01:00 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.146: INFO: Container coredns ready: true, restart count 1 May 13 22:05:03.146: INFO: kube-controller-manager-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.146: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:05:03.146: INFO: kube-scheduler-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.146: INFO: Container kube-scheduler ready: true, restart count 2 May 13 22:05:03.146: INFO: node-exporter-zmlpx started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:05:03.146: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:05:03.146: INFO: Container node-exporter ready: true, restart count 0 May 13 22:05:03.229: INFO: Latency metrics for node master2 May 13 22:05:03.229: INFO: Logging node info for node master3 May 13 22:05:03.233: INFO: Node Info: &Node{ObjectMeta:{master3 11a40d0b-d9d1-449f-a587-cc897edbfd9b 43405 0 2022-05-13 19:58:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:05:01 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:05:01 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:05:01 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:05:01 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fba609db464f479c06da20414d1979,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:55d995b3-c2cc-4b60-96f4-5a990abd0c48,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:05:03.234: INFO: Logging kubelet events for node master3 May 13 22:05:03.235: INFO: Logging pods the kubelet thinks is on node master3 May 13 22:05:03.245: INFO: kube-flannel-p5mwf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:05:03.245: INFO: Init container install-cni ready: true, restart count 0 May 13 22:05:03.245: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:05:03.245: INFO: dns-autoscaler-7df78bfcfb-wfmpz started at 2022-05-13 20:01:02 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.245: INFO: Container autoscaler ready: true, restart count 1 May 13 22:05:03.245: INFO: node-exporter-qh76s started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:05:03.245: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:05:03.245: INFO: Container node-exporter ready: true, restart count 0 May 13 22:05:03.245: INFO: kube-controller-manager-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.245: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:05:03.245: INFO: kube-scheduler-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.245: INFO: Container kube-scheduler ready: true, restart count 2 May 13 22:05:03.245: INFO: kube-proxy-6fl99 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.245: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:05:03.245: INFO: kube-apiserver-master3 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.245: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:05:03.245: INFO: kube-multus-ds-amd64-ffgk5 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.245: INFO: Container kube-multus ready: true, restart count 1 May 13 22:05:03.245: INFO: coredns-8474476ff8-x29nh started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.245: INFO: Container coredns ready: true, restart count 1 May 13 22:05:03.327: INFO: Latency metrics for node master3 May 13 22:05:03.327: INFO: Logging node info for node node1 May 13 22:05:03.329: INFO: Node Info: &Node{ObjectMeta:{node1 dca01e5e-a739-4ccc-b102-bfd163c4b832 43090 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true 
feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:12:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:20 +0000 UTC,LastTransitionTime:2022-05-13 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:04:55 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:04:55 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:04:55 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:04:55 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f73ea6ef9607468c91208265a5b02a1b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ff172cf5-ca8f-45aa-ade2-6dea8be1d249,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003949300,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:05:03.330: INFO: Logging kubelet events for node node1 May 13 22:05:03.332: INFO: Logging pods the kubelet thinks is on node node1 May 13 22:05:03.351: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 22:05:03.351: INFO: webserver-deployment-795d758f88-sj2r6 started at 2022-05-13 22:05:02 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container httpd ready: false, restart count 0 May 13 22:05:03.351: INFO: nginx-proxy-node1 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:05:03.351: INFO: webserver-deployment-847dcfb7fb-hlqk4 started at 2022-05-13 22:04:52 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container httpd ready: true, restart count 0 May 13 22:05:03.351: INFO: kubernetes-dashboard-785dcbb76d-tcgth started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 22:05:03.351: INFO: cmk-tfblh started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded) May 13 22:05:03.351: INFO: Container nodereport ready: true, restart count 0 May 13 22:05:03.351: INFO: Container reconcile ready: true, restart count 0 May 13 22:05:03.351: INFO: send-events-4160b205-2906-4e0c-aaec-6893e660274d started at 2022-05-13 22:04:40 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container p ready: true, restart count 0 May 13 22:05:03.351: INFO: ss2-0 started at 2022-05-13 22:04:34 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container webserver ready: true, restart count 0 May 13 22:05:03.351: INFO: kube-flannel-xfj7m started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:05:03.351: INFO: Init container install-cni ready: true, restart count 2 May 13 22:05:03.351: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:05:03.351: INFO: webserver-deployment-847dcfb7fb-ktwv6 started at 2022-05-13 22:04:52 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container httpd ready: true, restart count 0 May 13 22:05:03.351: INFO: pod-service-account-0f2c0345-bd7b-4fb6-a562-bed447527b40 started at 2022-05-13 22:04:53 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container test ready: true, restart count 0 May 13 22:05:03.351: INFO: cmk-webhook-6c9d5f8578-59hj6 started at 2022-05-13 20:13:16 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container cmk-webhook ready: true, restart count 0 May 13 22:05:03.351: INFO: ss-2 started at 2022-05-13 22:04:37 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container webserver ready: false, restart count 0 May 13 22:05:03.351: INFO: webserver-deployment-795d758f88-n2l85 started at 2022-05-13 22:05:02 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container httpd ready: false, restart count 0 May 13 22:05:03.351: INFO: collectd-p26j2 started at 2022-05-13 
20:18:14 +0000 UTC (0+3 container statuses recorded) May 13 22:05:03.351: INFO: Container collectd ready: true, restart count 0 May 13 22:05:03.351: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:05:03.351: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:05:03.351: INFO: node-feature-discovery-worker-l459c started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:05:03.351: INFO: node-exporter-42x8d started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:05:03.351: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:05:03.351: INFO: Container node-exporter ready: true, restart count 0 May 13 22:05:03.351: INFO: webserver-deployment-847dcfb7fb-zs6qj started at 2022-05-13 22:04:52 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container httpd ready: true, restart count 0 May 13 22:05:03.351: INFO: kube-multus-ds-amd64-dtt2x started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container kube-multus ready: true, restart count 1 May 13 22:05:03.351: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:05:03.351: INFO: cmk-init-discover-node1-m2p59 started at 2022-05-13 20:12:33 +0000 UTC (0+3 container statuses recorded) May 13 22:05:03.351: INFO: Container discover ready: false, restart count 0 May 13 22:05:03.351: INFO: Container init ready: false, restart count 0 May 13 22:05:03.351: INFO: Container install ready: false, restart count 0 May 13 22:05:03.351: INFO: prometheus-k8s-0 started at 2022-05-13 20:14:32 +0000 UTC (0+4 container statuses recorded) May 13 22:05:03.351: INFO: Container config-reloader ready: true, restart count 0 May 13 22:05:03.351: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 22:05:03.351: INFO: Container grafana ready: true, restart count 0 May 13 22:05:03.351: INFO: Container prometheus ready: true, restart count 1 May 13 22:05:03.351: INFO: forbid-27541321-6qs6l started at 2022-05-13 22:01:00 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container c ready: true, restart count 0 May 13 22:05:03.351: INFO: webserver-deployment-795d758f88-bs76z started at 2022-05-13 22:05:02 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container httpd ready: false, restart count 0 May 13 22:05:03.351: INFO: webserver-deployment-847dcfb7fb-8drsz started at 2022-05-13 22:04:52 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.351: INFO: Container httpd ready: true, restart count 0 May 13 22:05:03.351: INFO: kube-proxy-rs2zg started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:05:03.352: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:05:04.340: INFO: Latency metrics for node node1 May 13 22:05:04.340: INFO: Logging node info for node node2 May 13 22:05:04.345: INFO: Node Info: &Node{ObjectMeta:{node2 461ea6c2-df11-4be4-802e-29bddc0f2535 43150 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true 
feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:13:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:04:57 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:04:57 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:04:57 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:04:57 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b36a7c38429c4cc598bd0e6ca8278ad0,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:4fcc32fc-d037-4cf9-a62f-f372f6cc17cb,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:05:04.345: INFO: Logging kubelet events for node node2 May 13 22:05:04.348: INFO: Logging pods the kubelet thinks is on node node2 May 13 22:05:04.506: INFO: kube-multus-ds-amd64-l7nx2 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.506: INFO: Container kube-multus ready: true, restart count 1 May 13 22:05:04.507: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 started at 2022-05-13 20:17:23 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.508: INFO: Container tas-extender ready: true, restart count 0 May 13 22:05:04.508: INFO: webserver-deployment-847dcfb7fb-njx2f started at 2022-05-13 22:04:52 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.508: INFO: Container httpd ready: true, restart count 0 May 13 22:05:04.508: INFO: liveness-5585cd15-90a0-48e9-86e8-87f63b350bcb started at 2022-05-13 22:03:54 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.508: INFO: Container agnhost-container ready: true, restart count 0 May 13 22:05:04.508: INFO: node-feature-discovery-worker-cxxqf started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.508: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:05:04.508: INFO: sample-webhook-deployment-78988fc6cd-8r4mk started at 2022-05-13 22:05:00 +0000 UTC (0+1 container statuses 
recorded) May 13 22:05:04.508: INFO: Container sample-webhook ready: false, restart count 0 May 13 22:05:04.508: INFO: webserver-deployment-847dcfb7fb-xppgj started at 2022-05-13 22:04:52 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.508: INFO: Container httpd ready: true, restart count 0 May 13 22:05:04.508: INFO: webserver-deployment-847dcfb7fb-bfncl started at 2022-05-13 22:04:52 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.508: INFO: Container httpd ready: true, restart count 0 May 13 22:05:04.508: INFO: kube-flannel-lv9xf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:05:04.508: INFO: Init container install-cni ready: true, restart count 2 May 13 22:05:04.508: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:05:04.508: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.508: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:05:04.508: INFO: collectd-9gqhr started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded) May 13 22:05:04.508: INFO: Container collectd ready: true, restart count 0 May 13 22:05:04.508: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:05:04.508: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:05:04.508: INFO: webserver-deployment-847dcfb7fb-ncfgc started at 2022-05-13 22:04:52 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.508: INFO: Container httpd ready: true, restart count 0 May 13 22:05:04.508: INFO: webserver-deployment-795d758f88-9bxth started at 2022-05-13 22:05:02 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.508: INFO: Container httpd ready: false, restart count 0 May 13 22:05:04.508: INFO: cmk-qhbd6 started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded) May 13 22:05:04.508: INFO: Container nodereport ready: true, restart count 0 May 13 22:05:04.508: INFO: Container reconcile ready: true, restart count 0 May 13 22:05:04.508: INFO: prometheus-operator-585ccfb458-vrwnp started at 2022-05-13 20:14:11 +0000 UTC (0+2 container statuses recorded) May 13 22:05:04.508: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:05:04.508: INFO: Container prometheus-operator ready: true, restart count 0 May 13 22:05:04.508: INFO: ss-1 started at 2022-05-13 22:04:31 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.509: INFO: Container webserver ready: false, restart count 0 May 13 22:05:04.509: INFO: ss-0 started at 2022-05-13 22:03:59 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.509: INFO: Container webserver ready: false, restart count 0 May 13 22:05:04.509: INFO: ss2-1 started at 2022-05-13 22:04:41 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.509: INFO: Container webserver ready: false, restart count 0 May 13 22:05:04.509: INFO: webserver-deployment-795d758f88-9sngn started at (0+0 container statuses recorded) May 13 22:05:04.509: INFO: nginx-proxy-node2 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.509: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:05:04.509: INFO: kube-proxy-wkzbm started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.509: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:05:04.509: INFO: cmk-init-discover-node2-hm7r7 started at 2022-05-13 20:12:52 +0000 UTC (0+3 container 
statuses recorded) May 13 22:05:04.509: INFO: Container discover ready: false, restart count 0 May 13 22:05:04.509: INFO: Container init ready: false, restart count 0 May 13 22:05:04.509: INFO: Container install ready: false, restart count 0 May 13 22:05:04.509: INFO: pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491 started at (0+0 container statuses recorded) May 13 22:05:04.509: INFO: ss2-2 started at 2022-05-13 22:04:44 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.509: INFO: Container webserver ready: true, restart count 0 May 13 22:05:04.509: INFO: test-webserver-8b5a9ac9-774e-4a34-a641-b2827a4a5abe started at 2022-05-13 22:04:50 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.509: INFO: Container test-webserver ready: true, restart count 0 May 13 22:05:04.509: INFO: node-exporter-n5snd started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:05:04.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:05:04.509: INFO: Container node-exporter ready: true, restart count 0 May 13 22:05:04.509: INFO: test-pod started at 2022-05-13 22:01:22 +0000 UTC (0+1 container statuses recorded) May 13 22:05:04.509: INFO: Container webserver ready: true, restart count 0 May 13 22:05:05.253: INFO: Latency metrics for node node2 May 13 22:05:05.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6770" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [154.290 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:04:52.080: Unexpected error: <*errors.errorString | 0xc000b87300>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30991 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30991 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":262,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:04.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-86bb84bb-1635-48d1-8128-5d172c991e38 STEP: Creating a pod to test consume secrets May 13 22:05:04.103: INFO: 
Waiting up to 5m0s for pod "pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491" in namespace "projected-1019" to be "Succeeded or Failed" May 13 22:05:04.107: INFO: Pod "pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259637ms May 13 22:05:06.111: INFO: Pod "pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008205132s May 13 22:05:08.115: INFO: Pod "pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011791132s May 13 22:05:10.118: INFO: Pod "pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014824029s May 13 22:05:12.122: INFO: Pod "pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019703766s May 13 22:05:14.125: INFO: Pod "pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022379867s May 13 22:05:16.128: INFO: Pod "pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.025206264s STEP: Saw pod success May 13 22:05:16.128: INFO: Pod "pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491" satisfied condition "Succeeded or Failed" May 13 22:05:16.131: INFO: Trying to get logs from node node2 pod pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491 container projected-secret-volume-test: STEP: delete the pod May 13 22:05:16.144: INFO: Waiting for pod pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491 to disappear May 13 22:05:16.146: INFO: Pod pod-projected-secrets-5f7be894-3a8d-4e9e-a26b-4347963b7491 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:16.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1019" for this suite. 
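------------------------------
For reference, the fixture behind this projected-secret case is a pod that mounts the Secret through a projected volume whose defaultMode sets the permission bits on the projected files. A minimal client-go sketch of that shape (the object names, mode, mount path, and agnhost flags are illustrative, not values recovered from this run):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod builds a pod that projects "demo-secret" into a volume
// with defaultMode 0400 and reads one key back out via agnhost's mounttest.
func projectedSecretPod() *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-projected-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode, // applied to every projected file
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args: []string{
					"mounttest",
					"--file_content=/etc/projected-secret-volume/data-1",
					"--file_mode=/etc/projected-secret-volume/data-1",
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}

The test then waits for the pod to reach Succeeded and inspects the emitted file mode and content in its logs, which is the "Saw pod success" / "Trying to get logs" sequence visible above.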
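------------------------------
The one failure reported in this stretch is the NodePort session-affinity case, which could not reach endpoint 10.10.190.207:30991 within its 2m0s client timeout. For orientation, session affinity with a timeout is declared on the Service itself; a minimal sketch in client-go types (the service name, selector, and ports are illustrative, not taken from this run):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// affinityNodePortService declares ClientIP session affinity that expires
// after timeoutSeconds, the behavior the failed case was probing.
func affinityNodePortService() *corev1.Service {
	timeout := int32(10) // seconds a client IP stays pinned to one backend
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			Selector:        map[string]string{"app": "affinity-backend"},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
}

Judging by the error string, the Service object itself was accepted; the case failed later, while polling nodeIP:nodePort for TCP reachability.
------------------------------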
• [SLOW TEST:12.095 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":489,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:05.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-1db91caa-42fa-41f4-971e-5758650a611f STEP: Creating a pod to test consume secrets May 13 22:05:05.340: INFO: Waiting up to 5m0s for pod "pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd" in namespace "secrets-410" to be "Succeeded or Failed" May 13 22:05:05.342: INFO: Pod "pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.364464ms May 13 22:05:07.345: INFO: Pod "pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005616526s May 13 22:05:09.348: INFO: Pod "pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008337048s May 13 22:05:11.356: INFO: Pod "pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016137319s May 13 22:05:13.359: INFO: Pod "pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019469162s May 13 22:05:15.362: INFO: Pod "pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022768267s May 13 22:05:17.366: INFO: Pod "pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.026458258s STEP: Saw pod success May 13 22:05:17.366: INFO: Pod "pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd" satisfied condition "Succeeded or Failed" May 13 22:05:17.368: INFO: Trying to get logs from node node2 pod pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd container secret-env-test: STEP: delete the pod May 13 22:05:17.381: INFO: Waiting for pod pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd to disappear May 13 22:05:17.383: INFO: Pod pod-secrets-4a61a9b9-e854-4818-a96f-ebdaeb525efd no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:17.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-410" for this suite. 
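------------------------------
For reference, the env-var variant above differs from the volume-based cases only in how the Secret reaches the container: a secretKeyRef in the container's env, resolved once at container start. A minimal sketch (all names are illustrative):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretEnvPod surfaces one Secret key as the SECRET_DATA environment
// variable and dumps the environment so the test can inspect its logs.
func secretEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret-env-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
}

Unlike the downward API volume exercised in the next case, environment variables are not refreshed after the container starts; a changed Secret is only picked up on restart.
------------------------------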
• [SLOW TEST:12.089 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":275,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:04.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 13 22:05:04.816: INFO: The status of Pod annotationupdate649fd0d5-4d16-4db5-9dae-ebaaf4a27612 is Pending, waiting for it to be Running (with Ready = true) May 13 22:05:06.819: INFO: The status of Pod annotationupdate649fd0d5-4d16-4db5-9dae-ebaaf4a27612 is Pending, waiting for it to be Running (with Ready = true) May 13 22:05:08.820: INFO: The status of Pod annotationupdate649fd0d5-4d16-4db5-9dae-ebaaf4a27612 is Pending, waiting for it to be Running (with Ready = true) May 13 22:05:10.823: INFO: The status of Pod annotationupdate649fd0d5-4d16-4db5-9dae-ebaaf4a27612 is Pending, waiting for it to be Running (with Ready = true) May 13 22:05:12.820: INFO: The status of Pod annotationupdate649fd0d5-4d16-4db5-9dae-ebaaf4a27612 is Pending, waiting for it to be Running (with Ready = true) May 13 22:05:14.821: INFO: The status of Pod annotationupdate649fd0d5-4d16-4db5-9dae-ebaaf4a27612 is Pending, waiting for it to be Running (with Ready = true) May 13 22:05:16.821: INFO: The status of Pod annotationupdate649fd0d5-4d16-4db5-9dae-ebaaf4a27612 is Running (Ready = true) May 13 22:05:17.350: INFO: Successfully updated pod "annotationupdate649fd0d5-4d16-4db5-9dae-ebaaf4a27612" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:19.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-501" for this suite. 
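------------------------------
[Editor's note] The annotation-update test above relies on a projected downward API volume: the kubelet rewrites the mounted "annotations" file after the pod's metadata changes, which is what the gap between "Successfully updated pod" and teardown gives it time to do. A sketch of such a pod, with illustrative image, command, and namespace:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"build": "one"}, // later patched to trigger a file rewrite
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.35", // illustrative; the e2e test uses a different image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------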
• [SLOW TEST:14.589 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":558,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:59.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:05:00.184: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:05:02.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:05:04.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:05:06.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:05:08.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:05:10.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076300, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:05:13.205: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 13 22:05:21.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-6307 attach --namespace=webhook-6307 to-be-attached-pod -i -c=container1' May 13 22:05:21.434: INFO: rc: 1 
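------------------------------
[Editor's note] The denial observed above (kubectl attach exits with rc: 1) comes from a webhook registered for CONNECT on the pods/attach subresource. A hedged sketch of such a registration; the configuration name, service path, and CA bundle are placeholders, not the e2e suite's actual values:

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func strPtr(s string) *string { return &s }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail

	hook := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-attaching-pod.example.com"}, // illustrative name
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-attaching-pod.example.com",
			// CONNECT on pods/attach is the operation `kubectl attach` triggers.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Connect},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods/attach"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-6307",
					Name:      "e2e-test-webhook",
					Path:      strPtr("/pods/attach"), // assumed handler path
				},
				CABundle: []byte("<PEM CA bundle for the webhook server>"), // placeholder
			},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), hook, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------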
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:21.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6307" for this suite. STEP: Destroying namespace "webhook-6307-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.577 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":27,"skipped":387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:16.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 13 22:05:16.688: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:05:16.698: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:05:18.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076316, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076316, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076316, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076316, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:05:20.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076316, 
loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076316, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076316, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076316, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:05:23.717: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:23.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1275" for this suite. STEP: Destroying namespace "webhook-1275-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.670 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":31,"skipped":496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:19.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium May 13 22:05:19.430: INFO: Waiting up to 5m0s for pod "pod-e4819687-0283-4e2e-8965-56ac44fad653" in namespace "emptydir-4337" to be "Succeeded or Failed" May 13 22:05:19.432: INFO: Pod "pod-e4819687-0283-4e2e-8965-56ac44fad653": Phase="Pending", Reason="", readiness=false. Elapsed: 2.564072ms May 13 22:05:21.436: INFO: Pod "pod-e4819687-0283-4e2e-8965-56ac44fad653": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006626206s May 13 22:05:23.442: INFO: Pod "pod-e4819687-0283-4e2e-8965-56ac44fad653": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011929585s May 13 22:05:25.447: INFO: Pod "pod-e4819687-0283-4e2e-8965-56ac44fad653": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016987476s STEP: Saw pod success May 13 22:05:25.447: INFO: Pod "pod-e4819687-0283-4e2e-8965-56ac44fad653" satisfied condition "Succeeded or Failed" May 13 22:05:25.449: INFO: Trying to get logs from node node2 pod pod-e4819687-0283-4e2e-8965-56ac44fad653 container test-container: STEP: delete the pod May 13 22:05:25.461: INFO: Waiting for pod pod-e4819687-0283-4e2e-8965-56ac44fad653 to disappear May 13 22:05:25.463: INFO: Pod pod-e4819687-0283-4e2e-8965-56ac44fad653 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:25.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4337" for this suite. • [SLOW TEST:6.077 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":567,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:17.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 13 22:05:17.800: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:05:17.813: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:05:19.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076317, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076317, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076317, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076317, loc:(*time.Location)(0x9e2e180)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:05:22.834: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:05:22.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5317-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:30.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9708" for this suite. STEP: Destroying namespace "webhook-9708-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.513 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":15,"skipped":298,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:30.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching May 13 22:05:31.037: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching May 13 22:05:31.040: INFO: starting watch STEP: patching STEP: updating May 13 22:05:31.051: INFO: waiting for watch events with expected annotations May 13 22:05:31.051: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:31.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-5455" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":16,"skipped":313,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:25.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod May 13 22:05:25.589: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 13 22:05:27.593: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 13 22:05:29.594: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod May 13 22:05:29.611: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) May 13 22:05:31.616: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 13 22:05:31.619: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-667 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:05:31.619: INFO: >>> kubeConfig: /root/.kube/config May 13 22:05:31.848: INFO: Exec stderr: "" May 13 22:05:31.848: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-667 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:05:31.848: INFO: >>> kubeConfig: /root/.kube/config May 13 22:05:31.958: INFO: Exec stderr: "" May 13 22:05:31.958: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-667 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:05:31.958: INFO: >>> kubeConfig: /root/.kube/config May 13 22:05:32.174: INFO: Exec stderr: "" May 13 22:05:32.174: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-667 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:05:32.174: INFO: >>> kubeConfig: /root/.kube/config May 13 22:05:32.301: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 13 22:05:32.301: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-667 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:05:32.301: INFO: >>> kubeConfig: /root/.kube/config May 13 22:05:32.389: INFO: Exec stderr: "" May 13 22:05:32.389: INFO: ExecWithOptions 
{Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-667 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:05:32.389: INFO: >>> kubeConfig: /root/.kube/config May 13 22:05:32.472: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 13 22:05:32.472: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-667 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:05:32.472: INFO: >>> kubeConfig: /root/.kube/config May 13 22:05:32.549: INFO: Exec stderr: "" May 13 22:05:32.549: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-667 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:05:32.549: INFO: >>> kubeConfig: /root/.kube/config May 13 22:05:32.635: INFO: Exec stderr: "" May 13 22:05:32.635: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-667 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:05:32.635: INFO: >>> kubeConfig: /root/.kube/config May 13 22:05:32.716: INFO: Exec stderr: "" May 13 22:05:32.716: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-667 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:05:32.716: INFO: >>> kubeConfig: /root/.kube/config May 13 22:05:32.795: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:32.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-667" for this suite. 
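------------------------------
[Editor's note] The ExecWithOptions entries above correspond to exec subresource requests streamed over SPDY. A minimal client-go equivalent of one "cat /etc/hosts" exec against the log's test-pod:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Build the same kind of exec request the log's ExecWithOptions entries show.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-667").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("stdout: %q\nstderr: %q\n", stdout.String(), stderr.String())
}
------------------------------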
• [SLOW TEST:7.264 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":601,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:59.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-4816 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4816 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4816 May 13 22:03:59.422: INFO: Found 0 stateful pods, waiting for 1 May 13 22:04:09.426: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 13 22:04:09.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4816 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 22:04:09.877: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 13 22:04:09.877: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 22:04:09.877: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 22:04:09.880: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 13 22:04:19.885: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 13 22:04:19.885: INFO: Waiting for statefulset status.replicas updated to 0 May 13 22:04:19.896: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999497s May 13 22:04:20.900: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996975175s May 13 22:04:21.904: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.993037924s May 13 22:04:22.939: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98900686s May 13 22:04:23.943: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.953664476s May 13 22:04:24.947: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.948759176s May 13 22:04:25.951: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 3.944746148s May 13 22:04:26.955: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.941961366s May 13 22:04:27.959: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.937206838s May 13 22:04:28.965: INFO: Verifying statefulset ss doesn't scale past 1 for another 931.311863ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4816 May 13 22:04:29.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4816 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:04:30.733: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 13 22:04:30.733: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 22:04:30.733: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 22:04:30.736: INFO: Found 1 stateful pods, waiting for 3 May 13 22:04:40.740: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 13 22:04:40.740: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 13 22:04:40.740: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false May 13 22:04:50.740: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 13 22:04:50.740: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 13 22:04:50.740: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 13 22:04:50.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4816 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 22:04:51.016: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 13 22:04:51.016: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 22:04:51.016: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 22:04:51.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4816 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 22:04:51.467: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 13 22:04:51.467: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 22:04:51.467: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 22:04:51.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4816 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 22:04:51.708: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 13 22:04:51.709: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 22:04:51.709: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html' May 13 22:04:51.709: INFO: Waiting for statefulset status.replicas updated to 0 May 13 22:04:51.711: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 13 22:05:01.719: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 13 22:05:01.719: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 13 22:05:01.719: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 13 22:05:01.728: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999475s May 13 22:05:02.732: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996748659s May 13 22:05:03.741: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993321262s May 13 22:05:04.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984900201s May 13 22:05:05.748: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980637597s May 13 22:05:06.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.977353925s May 13 22:05:07.755: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.974296198s May 13 22:05:08.761: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.968352136s May 13 22:05:09.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.963501448s May 13 22:05:10.771: INFO: Verifying statefulset ss doesn't scale past 3 for another 959.079623ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4816 May 13 22:05:11.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4816 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:05:12.196: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 13 22:05:12.196: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 22:05:12.196: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 22:05:12.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4816 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:05:12.673: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 13 22:05:12.674: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 22:05:12.674: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 22:05:12.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4816 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:05:13.158: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 13 22:05:13.158: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 22:05:13.158: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 22:05:13.158: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 13 22:05:33.181: INFO: Deleting all statefulset in ns statefulset-4816 May 13 22:05:33.184: INFO: Scaling statefulset ss to 0 May 13 22:05:33.194: INFO: Waiting for statefulset status.replicas updated to 0 May 13 22:05:33.197: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:33.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4816" for this suite. • [SLOW TEST:93.871 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":23,"skipped":474,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:32.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:05:32.839: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:38.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8763" for this suite. 
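------------------------------
[Editor's note] Creating and deleting a CRD, as the test above does, goes through the apiextensions clientset rather than the core one. A sketch with an illustrative widgets.example.com group/kind and a minimal structural schema (not the e2e suite's actual fixture):

package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := apiextensionsclientset.NewForConfigOrDie(cfg)

	preserve := true
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"}, // illustrative group/kind
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve, // minimal schema for the sketch
					},
				},
			}},
		},
	}

	crds := cs.ApiextensionsV1().CustomResourceDefinitions()
	if _, err := crds.Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if err := crds.Delete(context.TODO(), crd.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------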
• [SLOW TEST:6.047 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":33,"skipped":604,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:38.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-3c64760d-06c1-4a0b-bfbe-9d57e7e17465 STEP: Creating a pod to test consume configMaps May 13 22:05:38.916: INFO: Waiting up to 5m0s for pod "pod-configmaps-43505e4c-bc67-481b-8ccc-5815693fc1fe" in namespace "configmap-3317" to be "Succeeded or Failed" May 13 22:05:38.923: INFO: Pod "pod-configmaps-43505e4c-bc67-481b-8ccc-5815693fc1fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395444ms May 13 22:05:40.929: INFO: Pod "pod-configmaps-43505e4c-bc67-481b-8ccc-5815693fc1fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01245789s May 13 22:05:42.932: INFO: Pod "pod-configmaps-43505e4c-bc67-481b-8ccc-5815693fc1fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015700739s STEP: Saw pod success May 13 22:05:42.932: INFO: Pod "pod-configmaps-43505e4c-bc67-481b-8ccc-5815693fc1fe" satisfied condition "Succeeded or Failed" May 13 22:05:42.935: INFO: Trying to get logs from node node1 pod pod-configmaps-43505e4c-bc67-481b-8ccc-5815693fc1fe container agnhost-container: STEP: delete the pod May 13 22:05:42.957: INFO: Waiting for pod pod-configmaps-43505e4c-bc67-481b-8ccc-5815693fc1fe to disappear May 13 22:05:42.959: INFO: Pod pod-configmaps-43505e4c-bc67-481b-8ccc-5815693fc1fe no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:42.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3317" for this suite. 
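------------------------------
[Editor's note] The configMap-volume consumption above mounts the ConfigMap as files and reads a key back from the container. A hedged sketch of such a pod; the ConfigMap name and namespace are from the log, while the image, command, and key are assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "agnhost-container",
				Image:        "busybox:1.35",                                  // illustrative; the test uses agnhost
				Command:      []string{"cat", "/etc/configmap-volume/data-1"}, // assumed key
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume-3c64760d-06c1-4a0b-bfbe-9d57e7e17465",
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("configmap-3317").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------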
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":608,"failed":0} SSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:42.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange STEP: Verifying LimitRange creation was observed May 13 22:05:43.018: INFO: observed the limitRanges list STEP: Fetching the LimitRange to ensure it has proper values May 13 22:05:43.022: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 13 22:05:43.022: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 13 22:05:43.038: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 13 22:05:43.038: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 13 22:05:43.050: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 13 22:05:43.050: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: 
Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 13 22:05:50.099: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:05:50.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-8520" for this suite. • [SLOW TEST:7.144 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":35,"skipped":611,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:50.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-bdcee6bb-9488-4592-a241-c7ef38f3ac96 May 13 22:05:50.165: INFO: Pod name my-hostname-basic-bdcee6bb-9488-4592-a241-c7ef38f3ac96: Found 0 pods out of 1 May 13 22:05:55.168: INFO: Pod name my-hostname-basic-bdcee6bb-9488-4592-a241-c7ef38f3ac96: Found 1 pods out of 1 May 13 22:05:55.168: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-bdcee6bb-9488-4592-a241-c7ef38f3ac96" are running May 13 22:05:55.170: INFO: Pod "my-hostname-basic-bdcee6bb-9488-4592-a241-c7ef38f3ac96-lkcxg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 22:05:50 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 22:05:53 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 22:05:53 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 22:05:50 +0000 UTC Reason: Message:}]) May 13 22:05:55.171: INFO: Trying to dial the pod May 13 22:06:00.183: INFO: Controller my-hostname-basic-bdcee6bb-9488-4592-a241-c7ef38f3ac96: Got expected result from replica 1 [my-hostname-basic-bdcee6bb-9488-4592-a241-c7ef38f3ac96-lkcxg]: "my-hostname-basic-bdcee6bb-9488-4592-a241-c7ef38f3ac96-lkcxg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:00.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3188" for this suite. • [SLOW TEST:10.060 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":36,"skipped":615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":62,"failed":0} [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:00:54.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0513 22:00:54.102461 39 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:02.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5021" for this suite. 
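------------------------------
[Editor's note] The long "Ensuring no more jobs are scheduled" wait in the CronJob test above is driven by ConcurrencyPolicy: Forbid, which stops the controller from starting a new Job while one is still running. A sketch of such a CronJob using batch/v1, as the deprecation warning in the log recommends (schedule, image, and sleep duration are illustrative):

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "forbid-demo"},
		Spec: batchv1.CronJobSpec{
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1.ForbidConcurrent, // no new Job while one is still running
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox:1.35",
								Command: []string{"sleep", "300"}, // outlives the schedule interval
							}},
						},
					},
				},
			},
		},
	}
	if _, err := cs.BatchV1().CronJobs("default").Create(context.TODO(), cj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------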
• [SLOW TEST:308.064 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":9,"skipped":62,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:33.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4595 STEP: creating service affinity-clusterip-transition in namespace services-4595 STEP: creating replication controller affinity-clusterip-transition in namespace services-4595 I0513 22:05:33.341902 23 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-4595, replica count: 3 I0513 22:05:36.393264 23 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:05:39.394127 23 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:05:42.394705 23 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 22:05:42.399: INFO: Creating new exec pod May 13 22:05:47.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4595 exec execpod-affinitycndsg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' May 13 22:05:47.658: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" May 13 22:05:47.658: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 13 22:05:47.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4595 exec execpod-affinitycndsg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.43.51 80' May 13 22:05:47.901: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.43.51 80\nConnection to 10.233.43.51 80 port [tcp/http] succeeded!\n" May 13 22:05:47.901: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 13 22:05:47.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4595 exec execpod-affinitycndsg -- /bin/sh -x -c for i in $(seq 0 15); do echo; 
curl -q -s --connect-timeout 2 http://10.233.43.51:80/ ; done' May 13 22:05:48.211: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n" May 13 22:05:48.212: INFO: stdout: "\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-qdp65\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-qdp65\naffinity-clusterip-transition-92x76\naffinity-clusterip-transition-92x76\naffinity-clusterip-transition-92x76\naffinity-clusterip-transition-92x76\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-92x76\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-92x76\naffinity-clusterip-transition-92x76\naffinity-clusterip-transition-qdp65\naffinity-clusterip-transition-6r9s4" May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-qdp65 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-qdp65 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-92x76 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-92x76 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-92x76 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-92x76 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-92x76 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-92x76 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-92x76 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-qdp65 May 13 22:05:48.212: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4595 exec execpod-affinitycndsg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.43.51:80/ ; done' May 13 22:05:48.529: INFO: 
stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.51:80/\n" May 13 22:05:48.530: INFO: stdout: "\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4\naffinity-clusterip-transition-6r9s4" May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Received response from host: affinity-clusterip-transition-6r9s4 May 13 22:05:48.530: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-4595, will wait for the garbage collector to delete the pods May 13 22:05:48.594: INFO: Deleting ReplicationController affinity-clusterip-transition took: 4.177507ms May 13 22:05:48.695: INFO: Terminating ReplicationController 
affinity-clusterip-transition pods took: 101.096442ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:02.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4595" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:29.208 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":24,"skipped":494,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:02.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready May 13 22:06:02.196: INFO: observed Pod pod-test in namespace pods-3754 in phase Pending with labels: map[test-pod-static:true] & conditions [] May 13 22:06:02.201: INFO: observed Pod pod-test in namespace pods-3754 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:02 +0000 UTC }] May 13 22:06:02.554: INFO: observed Pod pod-test in namespace pods-3754 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:02 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:02 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:02 +0000 UTC }] May 13 22:06:04.707: INFO: observed Pod pod-test in namespace pods-3754 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:02 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:02 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 
22:06:02 +0000 UTC }] May 13 22:06:06.047: INFO: Found Pod pod-test in namespace pods-3754 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:06:02 +0000 UTC }] STEP: patching the Pod with a new Label and updated data May 13 22:06:06.059: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted May 13 22:06:06.078: INFO: observed event type ADDED May 13 22:06:06.078: INFO: observed event type MODIFIED May 13 22:06:06.078: INFO: observed event type MODIFIED May 13 22:06:06.079: INFO: observed event type MODIFIED May 13 22:06:06.079: INFO: observed event type MODIFIED May 13 22:06:06.079: INFO: observed event type MODIFIED May 13 22:06:06.079: INFO: observed event type MODIFIED May 13 22:06:06.079: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:06.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3754" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":10,"skipped":66,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:06.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: May 13 22:06:06.144: INFO: Waiting up to 5m0s for pod "test-pod-c5314afe-e13d-4989-9049-8e1ded2a4375" in namespace "svcaccounts-7207" to be "Succeeded or Failed" May 13 22:06:06.149: INFO: Pod "test-pod-c5314afe-e13d-4989-9049-8e1ded2a4375": Phase="Pending", Reason="", readiness=false. Elapsed: 4.858179ms May 13 22:06:08.153: INFO: Pod "test-pod-c5314afe-e13d-4989-9049-8e1ded2a4375": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008986977s May 13 22:06:10.161: INFO: Pod "test-pod-c5314afe-e13d-4989-9049-8e1ded2a4375": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016764773s May 13 22:06:12.166: INFO: Pod "test-pod-c5314afe-e13d-4989-9049-8e1ded2a4375": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.021794155s STEP: Saw pod success May 13 22:06:12.166: INFO: Pod "test-pod-c5314afe-e13d-4989-9049-8e1ded2a4375" satisfied condition "Succeeded or Failed" May 13 22:06:12.168: INFO: Trying to get logs from node node2 pod test-pod-c5314afe-e13d-4989-9049-8e1ded2a4375 container agnhost-container: STEP: delete the pod May 13 22:06:12.182: INFO: Waiting for pod test-pod-c5314afe-e13d-4989-9049-8e1ded2a4375 to disappear May 13 22:06:12.184: INFO: Pod test-pod-c5314afe-e13d-4989-9049-8e1ded2a4375 no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:12.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7207" for this suite. • [SLOW TEST:6.083 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":11,"skipped":74,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:02.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes May 13 22:06:02.584: INFO: The status of Pod pod-update-activedeadlineseconds-f0e7a232-0f1c-4e12-bff3-e3c407b0e3eb is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:04.588: INFO: The status of Pod pod-update-activedeadlineseconds-f0e7a232-0f1c-4e12-bff3-e3c407b0e3eb is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:06.589: INFO: The status of Pod pod-update-activedeadlineseconds-f0e7a232-0f1c-4e12-bff3-e3c407b0e3eb is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:08.591: INFO: The status of Pod pod-update-activedeadlineseconds-f0e7a232-0f1c-4e12-bff3-e3c407b0e3eb is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod May 13 22:06:09.109: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f0e7a232-0f1c-4e12-bff3-e3c407b0e3eb" May 13 22:06:09.109: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f0e7a232-0f1c-4e12-bff3-e3c407b0e3eb" in namespace "pods-7961" to be "terminated due to deadline exceeded" May 13 22:06:09.112: INFO: Pod "pod-update-activedeadlineseconds-f0e7a232-0f1c-4e12-bff3-e3c407b0e3eb": Phase="Running", Reason="", readiness=true. Elapsed: 2.81247ms May 13 22:06:11.118: INFO: Pod "pod-update-activedeadlineseconds-f0e7a232-0f1c-4e12-bff3-e3c407b0e3eb": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008168295s May 13 22:06:13.123: INFO: Pod "pod-update-activedeadlineseconds-f0e7a232-0f1c-4e12-bff3-e3c407b0e3eb": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.013918063s May 13 22:06:13.123: INFO: Pod "pod-update-activedeadlineseconds-f0e7a232-0f1c-4e12-bff3-e3c407b0e3eb" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:13.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7961" for this suite. • [SLOW TEST:10.583 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":508,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:12.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
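Stepping back to the activeDeadlineSeconds spec that just passed: spec.activeDeadlineSeconds is one of the few pod-spec fields mutable on a running pod (it may be set, or lowered once set), and updating it makes the kubelet terminate the pod with Phase=Failed and Reason=DeadlineExceeded, the exact transition polled for above. A client-go sketch of such an update, with hypothetical names:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// shortenDeadline patches a running pod's spec.activeDeadlineSeconds down to
// the given value. Once the deadline elapses, the kubelet kills the pod and
// its phase becomes Failed with reason DeadlineExceeded. The function and
// parameter names are illustrative, not from this suite.
func shortenDeadline(ctx context.Context, cs kubernetes.Interface, ns, pod string, seconds int64) error {
	patch := []byte(fmt.Sprintf(`{"spec":{"activeDeadlineSeconds":%d}}`, seconds))
	_, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}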
May 13 22:06:12.262: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:14.266: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:16.266: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 13 22:06:16.282: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:18.285: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:20.288: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook May 13 22:06:20.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 13 22:06:20.300: INFO: Pod pod-with-prestop-exec-hook still exists May 13 22:06:22.302: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 13 22:06:22.306: INFO: Pod pod-with-prestop-exec-hook still exists May 13 22:06:24.302: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 13 22:06:24.305: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:24.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8415" for this suite. 
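The prestop spec above passes because, during deletion, the kubelet runs a container's preStop hook and waits for it before sending SIGTERM; the suite verifies delivery by having the hook call the pod-handle-http-request helper created in BeforeEach, which is what the "check prestop hook" step queries. A rough Go sketch of a pod shaped like this, assuming a reachable handler URL (the helper name, image, and hook command are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithPreStopExec sketches a pod whose container runs an exec preStop hook
// on deletion: the kubelet executes the command inside the container, then
// proceeds with termination. Note: corev1.Handler matches the v1.21 client
// libraries this suite was built with; newer k8s.io/api releases renamed the
// field type to corev1.LifecycleHandler.
func podWithPreStopExec(namespace, handlerURL string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-example", Namespace: namespace},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.35",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Ping the handler so an observer can confirm the hook ran.
							Command: []string{"wget", "-q", "-O", "-", handlerURL},
						},
					},
				},
			}},
		},
	}
}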
• [SLOW TEST:12.097 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":86,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:00.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-zptb STEP: Creating a pod to test atomic-volume-subpath May 13 22:06:00.298: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zptb" in namespace "subpath-1316" to be "Succeeded or Failed" May 13 22:06:00.301: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167456ms May 13 22:06:02.303: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004422517s May 13 22:06:04.306: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007904731s May 13 22:06:06.310: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Running", Reason="", readiness=true. Elapsed: 6.01186709s May 13 22:06:08.314: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Running", Reason="", readiness=true. Elapsed: 8.015714162s May 13 22:06:10.319: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Running", Reason="", readiness=true. Elapsed: 10.020903601s May 13 22:06:12.323: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Running", Reason="", readiness=true. Elapsed: 12.024492163s May 13 22:06:14.326: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Running", Reason="", readiness=true. Elapsed: 14.02724629s May 13 22:06:16.329: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Running", Reason="", readiness=true. Elapsed: 16.030429668s May 13 22:06:18.333: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Running", Reason="", readiness=true. Elapsed: 18.034507352s May 13 22:06:20.337: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Running", Reason="", readiness=true. Elapsed: 20.038188695s May 13 22:06:22.341: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Running", Reason="", readiness=true. Elapsed: 22.042858813s May 13 22:06:24.344: INFO: Pod "pod-subpath-test-secret-zptb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.046013871s STEP: Saw pod success May 13 22:06:24.344: INFO: Pod "pod-subpath-test-secret-zptb" satisfied condition "Succeeded or Failed" May 13 22:06:24.347: INFO: Trying to get logs from node node2 pod pod-subpath-test-secret-zptb container test-container-subpath-secret-zptb: STEP: delete the pod May 13 22:06:24.360: INFO: Waiting for pod pod-subpath-test-secret-zptb to disappear May 13 22:06:24.362: INFO: Pod pod-subpath-test-secret-zptb no longer exists STEP: Deleting pod pod-subpath-test-secret-zptb May 13 22:06:24.362: INFO: Deleting pod "pod-subpath-test-secret-zptb" in namespace "subpath-1316" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:24.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1316" for this suite. • [SLOW TEST:24.114 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":37,"skipped":644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:01:22.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5244 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5244 STEP: Creating statefulset with conflicting port in namespace statefulset-5244 STEP: Waiting until pod test-pod will start running in namespace statefulset-5244 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5244 May 13 22:06:26.489: FAIL: Pod ss-0 expected to be re-created at least once Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00125b200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00125b200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00125b200, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] Basic StatefulSet functionality 
[StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 13 22:06:26.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5244 describe po test-pod' May 13 22:06:26.691: INFO: stderr: "" May 13 22:06:26.691: INFO: stdout: "Name: test-pod\nNamespace: statefulset-5244\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Fri, 13 May 2022 22:01:22 +0000\nLabels: \nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.30\"\n ],\n \"mac\": \"96:44:e6:f2:40:a9\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.30\"\n ],\n \"mac\": \"96:44:e6:f2:40:a9\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.4.30\nIPs:\n IP: 10.244.4.30\nContainers:\n webserver:\n Container ID: docker://7cb6bff65597898cbfdf344598be2d971863261d6bed1355cee15fa772fb906e\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Fri, 13 May 2022 22:01:25 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bdjv9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-bdjv9:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 296.442871ms\n Normal Created 5m2s kubelet Created container webserver\n Normal Started 5m1s kubelet Started container webserver\n" May 13 22:06:26.691: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-5244 Priority: 0 Node: node2/10.10.190.208 Start Time: Fri, 13 May 2022 22:01:22 +0000 Labels: Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.30" ], "mac": "96:44:e6:f2:40:a9", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.30" ], "mac": "96:44:e6:f2:40:a9", "default": true, "dns": {} }] kubernetes.io/psp: privileged Status: Running IP: 10.244.4.30 IPs: IP: 10.244.4.30 Containers: webserver: Container ID: docker://7cb6bff65597898cbfdf344598be2d971863261d6bed1355cee15fa772fb906e Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Fri, 13 May 2022 22:01:25 +0000 Ready: True Restart Count: 0 Environment: Mounts: 
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bdjv9 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-bdjv9: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulling 5m2s kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Normal Pulled 5m2s kubelet Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 296.442871ms Normal Created 5m2s kubelet Created container webserver Normal Started 5m1s kubelet Started container webserver May 13 22:06:26.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5244 logs test-pod --tail=100' May 13 22:06:26.861: INFO: stderr: "" May 13 22:06:26.861: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.30. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.30. Set the 'ServerName' directive globally to suppress this message\n[Fri May 13 22:01:25.176617 2022] [mpm_event:notice] [pid 1:tid 140516650797928] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri May 13 22:01:25.176654 2022] [core:notice] [pid 1:tid 140516650797928] AH00094: Command line: 'httpd -D FOREGROUND'\n" May 13 22:06:26.861: INFO: Last 100 log lines of test-pod: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.30. Set the 'ServerName' directive globally to suppress this message AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.30. Set the 'ServerName' directive globally to suppress this message [Fri May 13 22:01:25.176617 2022] [mpm_event:notice] [pid 1:tid 140516650797928] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Fri May 13 22:01:25.176654 2022] [core:notice] [pid 1:tid 140516650797928] AH00094: Command line: 'httpd -D FOREGROUND' May 13 22:06:26.861: INFO: Deleting all statefulset in ns statefulset-5244 May 13 22:06:26.864: INFO: Scaling statefulset ss to 0 May 13 22:06:26.873: INFO: Waiting for statefulset status.replicas updated to 0 May 13 22:06:26.876: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "statefulset-5244". STEP: Found 7 events. May 13 22:06:26.888: INFO: At 2022-05-13 22:01:22 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. 
Allowed ports: [9100]] May 13 22:06:26.888: INFO: At 2022-05-13 22:01:22 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []] May 13 22:06:26.888: INFO: At 2022-05-13 22:01:22 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]] May 13 22:06:26.888: INFO: At 2022-05-13 22:01:24 +0000 UTC - event for test-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" May 13 22:06:26.888: INFO: At 2022-05-13 22:01:24 +0000 UTC - event for test-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 296.442871ms May 13 22:06:26.888: INFO: At 2022-05-13 22:01:24 +0000 UTC - event for test-pod: {kubelet node2} Created: Created container webserver May 13 22:06:26.888: INFO: At 2022-05-13 22:01:25 +0000 UTC - event for test-pod: {kubelet node2} Started: Started container webserver May 13 22:06:26.891: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:06:26.891: INFO: test-pod node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:01:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:01:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:01:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:01:22 +0000 UTC }] May 13 22:06:26.891: INFO: May 13 22:06:26.897: INFO: Logging node info for node master1 May 13 22:06:26.900: INFO: Node Info: &Node{ObjectMeta:{master1 e893469e-45f9-457b-9379-276178f6209f 45322 0 2022-05-13 19:57:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:57:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-13 19:57:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-13 20:05:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-13 20:09:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:23 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:23 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:23 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:06:23 +0000 UTC,LastTransitionTime:2022-05-13 20:03:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5bc4f1fb629f4c3bb455995355cca59c,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:196d75bb-273f-44bf-9b96-1cfef0d34445,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 tasextender:latest 
localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:06:26.901: INFO: Logging kubelet events for node master1 May 13 22:06:26.902: INFO: Logging pods the kubelet thinks is on node master1 May 13 22:06:26.916: INFO: kube-flannel-jw4mp started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:06:26.916: INFO: Init container install-cni ready: true, restart count 2 May 13 22:06:26.916: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:06:26.916: INFO: kube-multus-ds-amd64-ts4fz started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:06:26.916: INFO: Container kube-multus ready: true, restart count 1 May 13 22:06:26.916: INFO: container-registry-65d7c44b96-gqdgz started at 2022-05-13 20:05:09 +0000 UTC (0+2 container statuses recorded) May 13 22:06:26.916: INFO: Container docker-registry ready: true, restart count 0 May 13 22:06:26.916: INFO: Container nginx ready: true, restart count 0 May 13 22:06:26.916: INFO: kube-apiserver-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:06:26.916: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:06:26.916: INFO: kube-controller-manager-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:06:26.916: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:06:26.916: INFO: kube-scheduler-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:06:26.916: INFO: Container kube-scheduler ready: true, restart count 0 May 13 22:06:26.916: INFO: kube-proxy-6q994 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:06:26.916: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:06:26.916: INFO: node-feature-discovery-controller-cff799f9f-k2qmv started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:06:26.916: INFO: Container nfd-controller ready: true, restart count 0 May 13 22:06:26.916: INFO: node-exporter-2jxfg started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:06:26.916: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:06:26.916: INFO: Container node-exporter ready: true, restart count 0 May 13 22:06:27.000: INFO: Latency 
metrics for node master1 May 13 22:06:27.000: INFO: Logging node info for node master2 May 13 22:06:27.003: INFO: Node Info: &Node{ObjectMeta:{master2 6394fb00-7ac6-4b0d-af37-0e7baf892992 45320 0 2022-05-13 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 
UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:23 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:23 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:23 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:06:23 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0c26206724384f32848637ec210bf517,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:87b6bd6a-947f-4fda-a24f-503738da156e,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:06:27.003: INFO: Logging kubelet events for node master2 May 13 22:06:27.005: INFO: Logging pods the kubelet thinks is on node master2 May 13 22:06:27.021: INFO: kube-apiserver-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:06:27.021: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:06:27.021: INFO: kube-proxy-jxbwz started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:06:27.021: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:06:27.021: INFO: kube-flannel-gndff started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:06:27.021: INFO: Init container install-cni ready: true, restart count 2 May 13 22:06:27.021: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:06:27.021: INFO: kube-multus-ds-amd64-w98wb started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:06:27.021: INFO: Container kube-multus ready: true, restart count 1 May 13 22:06:27.021: INFO: coredns-8474476ff8-m6b8s started at 2022-05-13 20:01:00 +0000 UTC (0+1 container statuses recorded) May 13 22:06:27.021: INFO: Container coredns ready: true, restart count 1 May 13 22:06:27.021: INFO: kube-controller-manager-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:06:27.021: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:06:27.021: INFO: kube-scheduler-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:06:27.021: INFO: Container kube-scheduler ready: true, restart count 2 May 13 22:06:27.021: INFO: node-exporter-zmlpx started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:06:27.021: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:06:27.021: INFO: Container node-exporter ready: true, restart count 0 May 13 22:06:27.110: INFO: Latency metrics for node master2 May 13 22:06:27.110: INFO: Logging node info for node master3 May 13 22:06:27.113: INFO: Node Info: &Node{ObjectMeta:{master3 11a40d0b-d9d1-449f-a587-cc897edbfd9b 45303 0 2022-05-13 19:58:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 
kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:21 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:21 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:21 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:06:21 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fba609db464f479c06da20414d1979,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:55d995b3-c2cc-4b60-96f4-5a990abd0c48,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 22:06:27.114: INFO: Logging kubelet events for node master3
May 13 22:06:27.116: INFO: Logging pods the kubelet thinks is on node master3
May 13 22:06:27.126: INFO: kube-apiserver-master3 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.126: INFO: Container kube-apiserver ready: true, restart count 0
May 13 22:06:27.126: INFO: kube-multus-ds-amd64-ffgk5 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.126: INFO: Container kube-multus ready: true, restart count 1
May 13 22:06:27.126: INFO: coredns-8474476ff8-x29nh started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.126: INFO: Container coredns ready: true, restart count 1
May 13 22:06:27.126: INFO: kube-controller-manager-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.126: INFO: Container kube-controller-manager ready: true, restart count 2
May 13 22:06:27.126: INFO: kube-scheduler-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.126: INFO: Container kube-scheduler ready: true, restart count 2
May 13 22:06:27.126: INFO: kube-proxy-6fl99 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.126: INFO: Container kube-proxy ready: true, restart count 2
May 13 22:06:27.126: INFO: kube-flannel-p5mwf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 22:06:27.126: INFO: Init container install-cni ready: true, restart count 0
May 13 22:06:27.126: INFO: Container kube-flannel ready: true, restart count 1
May 13 22:06:27.126: INFO: dns-autoscaler-7df78bfcfb-wfmpz started at 2022-05-13 20:01:02 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.126: INFO: Container autoscaler ready: true, restart count 1
May 13 22:06:27.126: INFO: node-exporter-qh76s started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 22:06:27.126: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 13 22:06:27.126: INFO: Container node-exporter ready: true, restart count 0
May 13 22:06:27.221: INFO: Latency metrics for node master3
May 13 22:06:27.221: INFO: Logging node info for node node1
May 13 22:06:27.225: INFO: Node Info: &Node{ObjectMeta:{node1 dca01e5e-a739-4ccc-b102-bfd163c4b832 45371 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true
feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 
2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:12:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:20 +0000 UTC,LastTransitionTime:2022-05-13 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:25 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:25 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:25 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:06:25 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f73ea6ef9607468c91208265a5b02a1b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ff172cf5-ca8f-45aa-ade2-6dea8be1d249,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003949300,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 22:06:27.225: INFO: Logging kubelet events for node node1
May 13 22:06:27.227: INFO: Logging pods the kubelet thinks is on node node1
May 13 22:06:27.245: INFO: collectd-p26j2 started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 22:06:27.245: INFO: Container collectd ready: true, restart count 0
May 13 22:06:27.245: INFO: Container collectd-exporter ready: true, restart count 0
May 13 22:06:27.245: INFO: Container rbac-proxy ready: true, restart count 0
May 13 22:06:27.245: INFO: kube-multus-ds-amd64-dtt2x started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container kube-multus ready: true, restart count 1
May 13 22:06:27.245: INFO: node-feature-discovery-worker-l459c started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container nfd-worker ready: true, restart count 0
May 13 22:06:27.245: INFO: node-exporter-42x8d started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 22:06:27.245: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 13 22:06:27.245: INFO: Container node-exporter ready: true, restart count 0
May 13 22:06:27.245: INFO: pod-projected-configmaps-7450f229-2549-4e23-b5c2-0c1279d0c580 started at 2022-05-13 22:06:24 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container agnhost-container ready: false, restart count 0
May 13 22:06:27.245: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container kube-sriovdp ready: true, restart count 0
May 13 22:06:27.245: INFO: externalname-service-nl6sq started at 2022-05-13 22:05:23 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container externalname-service ready: true, restart count 0
May 13 22:06:27.245: INFO: kube-proxy-rs2zg started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container kube-proxy ready: true, restart count 2
May 13 22:06:27.245: INFO: cmk-init-discover-node1-m2p59 started at 2022-05-13 20:12:33 +0000 UTC (0+3 container statuses recorded)
May 13 22:06:27.245: INFO: Container discover ready: false, restart count 0
May 13 22:06:27.245: INFO: Container init ready: false, restart count 0
May 13 22:06:27.245: INFO: Container install ready: false, restart count 0
May 13 22:06:27.245: INFO: prometheus-k8s-0 started at 2022-05-13 20:14:32 +0000 UTC (0+4 container statuses recorded)
May 13 22:06:27.245: INFO: Container config-reloader ready: true, restart count 0
May 13 22:06:27.245: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 13 22:06:27.245: INFO: Container grafana ready: true, restart count 0
May 13 22:06:27.245: INFO: Container prometheus ready: true, restart count 1
May 13 22:06:27.245: INFO: execpodcnzxk started at 2022-05-13 22:05:30 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container agnhost-container ready: true, restart count 0
May 13 22:06:27.245: INFO: nginx-proxy-node1 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container nginx-proxy ready: true, restart count 2
May 13 22:06:27.245: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2
May 13 22:06:27.245: INFO: kubernetes-dashboard-785dcbb76d-tcgth started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container kubernetes-dashboard ready: true, restart count 2
May 13 22:06:27.245: INFO: netserver-0 started at 2022-05-13 22:06:13 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container webserver ready: false, restart count 0
May 13 22:06:27.245: INFO: kube-flannel-xfj7m started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 22:06:27.245: INFO: Init container install-cni ready: true, restart count 2
May 13 22:06:27.245: INFO: Container kube-flannel ready: true, restart count 2
May 13 22:06:27.245: INFO: cmk-tfblh started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded)
May 13 22:06:27.245: INFO: Container nodereport ready: true, restart count 0
May 13 22:06:27.245: INFO: Container reconcile ready: true, restart count 0
May 13 22:06:27.245: INFO: ss2-0 started at 2022-05-13 22:05:43 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container webserver ready: true, restart count 0
May 13 22:06:27.245: INFO: cmk-webhook-6c9d5f8578-59hj6 started at 2022-05-13 20:13:16 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.245: INFO: Container cmk-webhook ready: true, restart count 0
May 13 22:06:27.409: INFO: Latency metrics for node node1
May 13 22:06:27.409: INFO: Logging node info for node node2
May 13 22:06:27.412: INFO: Node Info: &Node{ObjectMeta:{node2 461ea6c2-df11-4be4-802e-29bddc0f2535 45287 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3
feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:13:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:18 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:18 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:06:18 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:06:18 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b36a7c38429c4cc598bd0e6ca8278ad0,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:4fcc32fc-d037-4cf9-a62f-f372f6cc17cb,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 22:06:27.413: INFO: Logging kubelet events for node node2
May 13 22:06:27.415: INFO: Logging pods the kubelet thinks is on node node2
May 13 22:06:27.431: INFO: kube-flannel-lv9xf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 22:06:27.431: INFO: Init container install-cni ready: true, restart count 2
May 13 22:06:27.431: INFO: Container kube-flannel ready: true, restart count 2
May 13 22:06:27.431: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container kube-sriovdp ready: true, restart count 0
May 13 22:06:27.431: INFO: collectd-9gqhr started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 22:06:27.431: INFO: Container collectd ready: true, restart count 0
May 13 22:06:27.431: INFO: Container collectd-exporter ready: true, restart count 0
May 13 22:06:27.431: INFO: Container rbac-proxy ready: true, restart count 0
May 13 22:06:27.431: INFO: concurrent-27541326-79b8x started at 2022-05-13 22:06:00 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container c ready: true, restart count 0
May 13 22:06:27.431: INFO: externalname-service-sjxpv started at 2022-05-13 22:05:24 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container externalname-service ready: true, restart count 0
May 13 22:06:27.431: INFO: cmk-qhbd6 started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded)
May 13 22:06:27.431: INFO: Container nodereport ready: true, restart count 0
May 13 22:06:27.431: INFO: Container reconcile ready: true, restart count 0
May 13 22:06:27.431: INFO: prometheus-operator-585ccfb458-vrwnp started at 2022-05-13 20:14:11 +0000 UTC (0+2 container statuses recorded)
May 13 22:06:27.431: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 13 22:06:27.431: INFO: Container prometheus-operator ready: true, restart count 0
May 13 22:06:27.431: INFO: ss2-1 started at 2022-05-13 22:05:32 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container webserver ready: false, restart count 0
May 13 22:06:27.431: INFO: pod-handle-http-request started at 2022-05-13 22:06:24 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container agnhost-container ready: false, restart count 0
May 13 22:06:27.431: INFO: nginx-proxy-node2 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container nginx-proxy ready: true, restart count 2
May 13 22:06:27.431: INFO: kube-proxy-wkzbm started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container kube-proxy ready: true, restart count 2
May 13 22:06:27.431: INFO: cmk-init-discover-node2-hm7r7 started at 2022-05-13 20:12:52 +0000 UTC (0+3 container statuses recorded)
May 13 22:06:27.431: INFO: Container discover ready: false, restart count 0
May 13 22:06:27.431: INFO: Container init ready: false, restart count 0
May 13 22:06:27.431: INFO: Container install ready: false, restart count 0
May 13 22:06:27.431: INFO: test-webserver-8b5a9ac9-774e-4a34-a641-b2827a4a5abe started at 2022-05-13 22:04:50 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container test-webserver ready: true, restart count 0
May 13 22:06:27.431: INFO: node-exporter-n5snd started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 22:06:27.431: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 13 22:06:27.431: INFO: Container node-exporter ready: true, restart count 0
May 13 22:06:27.431: INFO: test-pod started at 2022-05-13 22:01:22 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container webserver ready: true, restart count 0
May 13 22:06:27.431: INFO: pod-handle-http-request started at 2022-05-13 22:06:12 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container agnhost-container ready: true, restart count 0
May 13 22:06:27.431: INFO: netserver-1 started at 2022-05-13 22:06:13 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container webserver ready: false, restart count 0
May 13 22:06:27.431: INFO: kube-multus-ds-amd64-l7nx2 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container kube-multus ready: true, restart count 1
May 13 22:06:27.431: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 started at 2022-05-13 20:17:23 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container tas-extender ready: true, restart count 0
May 13 22:06:27.431: INFO: liveness-5585cd15-90a0-48e9-86e8-87f63b350bcb started at 2022-05-13 22:03:54 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container agnhost-container ready: true, restart count 0
May 13 22:06:27.431: INFO: node-feature-discovery-worker-cxxqf started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container nfd-worker ready: true, restart count 0
May 13 22:06:27.431: INFO: ss2-2 started at 2022-05-13 22:06:22 +0000 UTC (0+1 container statuses recorded)
May 13 22:06:27.431: INFO: Container webserver ready: true, restart count 0
May 13 22:06:27.700: INFO: Latency metrics for node node2
May 13 22:06:27.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5244" for this suite.
• Failure [305.282 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Should recreate evicted statefulset [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    May 13 22:06:26.489: Pod ss-0 expected to be re-created at least once
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":5,"skipped":101,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 22:06:24.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-4cb5b19f-b616-4413-ae31-7d067b93ce39
STEP: Creating a pod to test consume configMaps
May 13 22:06:24.412: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7450f229-2549-4e23-b5c2-0c1279d0c580" in namespace "projected-3609" to be "Succeeded or Failed"
May 13 22:06:24.414: INFO: Pod "pod-projected-configmaps-7450f229-2549-4e23-b5c2-0c1279d0c580": Phase="Pending", Reason="", readiness=false. Elapsed: 1.869093ms
May 13 22:06:26.417: INFO: Pod "pod-projected-configmaps-7450f229-2549-4e23-b5c2-0c1279d0c580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005037157s
May 13 22:06:28.421: INFO: Pod "pod-projected-configmaps-7450f229-2549-4e23-b5c2-0c1279d0c580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008841159s
STEP: Saw pod success
May 13 22:06:28.421: INFO: Pod "pod-projected-configmaps-7450f229-2549-4e23-b5c2-0c1279d0c580" satisfied condition "Succeeded or Failed"
May 13 22:06:28.424: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-7450f229-2549-4e23-b5c2-0c1279d0c580 container agnhost-container:
STEP: delete the pod
May 13 22:06:28.437: INFO: Waiting for pod pod-projected-configmaps-7450f229-2549-4e23-b5c2-0c1279d0c580 to disappear
May 13 22:06:28.439: INFO: Pod pod-projected-configmaps-7450f229-2549-4e23-b5c2-0c1279d0c580 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 22:06:28.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3609" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":108,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 22:05:31.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 13 22:05:31.135: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7469 a26e9960-4417-4186-af7c-20ca7198f97a 44454 0 2022-05-13 22:05:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-13 22:05:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 13 22:05:31.135: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7469 a26e9960-4417-4186-af7c-20ca7198f97a 44454 0 2022-05-13 22:05:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-13 22:05:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 13 22:05:41.146: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7469 a26e9960-4417-4186-af7c-20ca7198f97a 44691 0 2022-05-13 22:05:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-13 22:05:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May 13 22:05:41.146: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7469 a26e9960-4417-4186-af7c-20ca7198f97a 44691 0 2022-05-13 22:05:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-13 22:05:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 13 22:05:51.155: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7469 a26e9960-4417-4186-af7c-20ca7198f97a 44866 0 2022-05-13 22:05:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-13 22:05:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 13 22:05:51.155: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7469 a26e9960-4417-4186-af7c-20ca7198f97a 44866 0 2022-05-13 22:05:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-13 22:05:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 13 22:06:01.160: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7469 a26e9960-4417-4186-af7c-20ca7198f97a 44966 0 2022-05-13 22:05:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-13 22:05:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 13 22:06:01.160: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7469 a26e9960-4417-4186-af7c-20ca7198f97a 44966 0 2022-05-13 22:05:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-13 22:05:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 13 22:06:11.168: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7469 1928b6e2-6721-4f23-873c-bbdb71a2f48e 45157 0 2022-05-13 22:06:11 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-13 22:06:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 13 22:06:11.168: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7469 1928b6e2-6721-4f23-873c-bbdb71a2f48e 45157 0 2022-05-13 22:06:11 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-13 22:06:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the
correct watchers observe the notification May 13 22:06:21.173: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7469 1928b6e2-6721-4f23-873c-bbdb71a2f48e 45299 0 2022-05-13 22:06:11 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-13 22:06:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 13 22:06:21.173: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7469 1928b6e2-6721-4f23-873c-bbdb71a2f48e 45299 0 2022-05-13 22:06:11 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-13 22:06:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:31.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7469" for this suite. • [SLOW TEST:60.075 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":17,"skipped":322,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:31.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching May 13 22:06:31.302: INFO: starting watch STEP: patching STEP: updating May 13 22:06:31.314: INFO: waiting for watch events with expected annotations May 13 22:06:31.314: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:31.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-7135" for this suite. 
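The IngressClass block above walks the resource's verb set end to end: create, get, list, watch, patch, update, delete, and delete-by-collection. A minimal client-go sketch of the core of that sequence (create, get, patch, delete-by-collection), assuming a cluster reachable through the same kubeconfig path the suite uses; the class name, label, and controller string are illustrative, not taken from this run:

// Sketch only: exercises a subset of the IngressClass verbs seen in the test.
package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ics := client.NetworkingV1().IngressClasses() // cluster-scoped: no namespace

	// create (name/label/controller are placeholders)
	ic := &networkingv1.IngressClass{
		ObjectMeta: metav1.ObjectMeta{Name: "sketch-class", Labels: map[string]string{"sketch": "true"}},
		Spec:       networkingv1.IngressClassSpec{Controller: "example.com/sketch-controller"},
	}
	if _, err := ics.Create(ctx, ic, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// get
	got, err := ics.Get(ctx, "sketch-class", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("controller:", got.Spec.Controller)
	// patch an annotation, then clean up by label selector, mirroring the
	// "deleting a collection" step in the log
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	ics.Patch(ctx, "sketch-class", types.MergePatchType, patch, metav1.PatchOptions{})
	ics.DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "sketch=true"})
}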
• ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":18,"skipped":353,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:31.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-92cb16d5-1aef-4bfd-81b9-b017fd8f17a1 STEP: Creating a pod to test consume secrets May 13 22:06:31.391: INFO: Waiting up to 5m0s for pod "pod-secrets-b6e49d5d-2eea-496b-b2e2-9f524e9d4440" in namespace "secrets-3926" to be "Succeeded or Failed" May 13 22:06:31.393: INFO: Pod "pod-secrets-b6e49d5d-2eea-496b-b2e2-9f524e9d4440": Phase="Pending", Reason="", readiness=false. Elapsed: 1.696344ms May 13 22:06:33.396: INFO: Pod "pod-secrets-b6e49d5d-2eea-496b-b2e2-9f524e9d4440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005061237s May 13 22:06:35.401: INFO: Pod "pod-secrets-b6e49d5d-2eea-496b-b2e2-9f524e9d4440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009464539s STEP: Saw pod success May 13 22:06:35.401: INFO: Pod "pod-secrets-b6e49d5d-2eea-496b-b2e2-9f524e9d4440" satisfied condition "Succeeded or Failed" May 13 22:06:35.403: INFO: Trying to get logs from node node2 pod pod-secrets-b6e49d5d-2eea-496b-b2e2-9f524e9d4440 container secret-volume-test: STEP: delete the pod May 13 22:06:35.415: INFO: Waiting for pod pod-secrets-b6e49d5d-2eea-496b-b2e2-9f524e9d4440 to disappear May 13 22:06:35.417: INFO: Pod pod-secrets-b6e49d5d-2eea-496b-b2e2-9f524e9d4440 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:35.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3926" for this suite. 
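The secrets test that just passed mounts a single Secret through two separate volumes in one pod. A pared-down pod spec in that spirit, built with k8s.io/api types; the secret name, image, key, and mount paths are illustrative:

// Sketch only: one Secret, two volumes, one consuming container.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secretVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "sketch-secret"},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox:1.35",
				// reads the same key through both mounts
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1"},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}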
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":362,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:35.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy May 13 22:06:35.489: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3447 proxy --unix-socket=/tmp/kubectl-proxy-unix292501360/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:35.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3447" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":20,"skipped":380,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:24.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
May 13 22:06:24.493: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:26.497: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:28.497: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 13 22:06:28.510: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:30.516: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:32.515: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook May 13 22:06:32.719: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 13 22:06:32.722: INFO: Pod pod-with-poststart-exec-hook still exists May 13 22:06:34.722: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 13 22:06:34.725: INFO: Pod pod-with-poststart-exec-hook still exists May 13 22:06:36.723: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 13 22:06:36.725: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:36.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4518" for this suite. 
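The lifecycle-hook test pairs a handler pod with a second pod whose container declares a postStart exec hook; the kubelet runs the hook right after the container starts, and the container is not considered started until the hook returns. A sketch of such a pod with an illustrative image and hook command (the real test instead has the hook call back to the handler pod over HTTP):

// Sketch only. Note: corev1.Handler is the v1.21-era type name; newer
// releases of k8s.io/api call it corev1.LifecycleHandler.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook-sketch"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "hooked",
				Image:   "busybox:1.35",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// placeholder side effect; a failing hook would
							// cause the container to be killed and restarted
							Command: []string{"sh", "-c", "echo poststart > /tmp/poststart"},
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}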
• [SLOW TEST:12.275 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":688,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:13.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-8291 STEP: creating a selector STEP: Creating the service pods in kubernetes May 13 22:06:13.161: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 13 22:06:13.203: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:15.206: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:17.207: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:06:19.207: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:06:21.207: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:06:23.205: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:06:25.205: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:06:27.207: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:06:29.206: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:06:31.206: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:06:33.206: INFO: The status of Pod netserver-0 is Running (Ready = true) May 13 22:06:33.211: INFO: The status of Pod netserver-1 is Running (Ready = false) May 13 22:06:35.216: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 13 22:06:39.234: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 13 22:06:39.234: INFO: Breadth first check of 10.244.3.19 on host 10.10.190.207... 
May 13 22:06:39.237: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.24:9080/dial?request=hostname&protocol=udp&host=10.244.3.19&port=8081&tries=1'] Namespace:pod-network-test-8291 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:06:39.237: INFO: >>> kubeConfig: /root/.kube/config May 13 22:06:39.330: INFO: Waiting for responses: map[] May 13 22:06:39.330: INFO: reached 10.244.3.19 after 0/1 tries May 13 22:06:39.330: INFO: Breadth first check of 10.244.4.127 on host 10.10.190.208... May 13 22:06:39.333: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.24:9080/dial?request=hostname&protocol=udp&host=10.244.4.127&port=8081&tries=1'] Namespace:pod-network-test-8291 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:06:39.333: INFO: >>> kubeConfig: /root/.kube/config May 13 22:06:39.419: INFO: Waiting for responses: map[] May 13 22:06:39.419: INFO: reached 10.244.4.127 after 0/1 tries May 13 22:06:39.419: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:39.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8291" for this suite. • [SLOW TEST:26.291 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":509,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:27.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:40.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-170" for this suite. • [SLOW TEST:13.100 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":6,"skipped":121,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:40.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:40.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-694" for this suite.
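The Events test reduces to plain CRUD against the core/v1 events API. A rough client-go equivalent under the same kubeconfig assumption; the event name, namespace, and field values are placeholders:

// Sketch only: create, list across namespaces, patch, delete an Event.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "default"

	ev := &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{Name: "sketch-event"},
		InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "some-pod"},
		Reason:         "Sketch",
		Message:        "created for illustration",
		Type:           corev1.EventTypeNormal,
	}
	if _, err := client.CoreV1().Events(ns).Create(ctx, ev, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// metav1.NamespaceAll ("") lists events in all namespaces, as the test does
	all, err := client.CoreV1().Events(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("events visible across namespaces:", len(all.Items))

	patch := []byte(`{"message":"patched for illustration"}`)
	client.CoreV1().Events(ns).Patch(ctx, "sketch-event", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	client.CoreV1().Events(ns).Delete(ctx, "sketch-event", metav1.DeleteOptions{})
}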
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":7,"skipped":135,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:39.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:06:39.501: INFO: Waiting up to 5m0s for pod "busybox-user-65534-5d9bb091-4f2e-430c-b89e-fe50b944b1ca" in namespace "security-context-test-7910" to be "Succeeded or Failed" May 13 22:06:39.503: INFO: Pod "busybox-user-65534-5d9bb091-4f2e-430c-b89e-fe50b944b1ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36686ms May 13 22:06:41.506: INFO: Pod "busybox-user-65534-5d9bb091-4f2e-430c-b89e-fe50b944b1ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005454651s May 13 22:06:43.509: INFO: Pod "busybox-user-65534-5d9bb091-4f2e-430c-b89e-fe50b944b1ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00843227s May 13 22:06:43.509: INFO: Pod "busybox-user-65534-5d9bb091-4f2e-430c-b89e-fe50b944b1ca" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:43.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7910" for this suite. 
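The uid-65534 check above comes down to a container-level SecurityContext. A minimal sketch; the busybox image and `id -u` command are illustrative stand-ins for the test's own image:

// Sketch only: run the container as uid 65534 (conventionally "nobody").
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(65534)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "main",
				Image:           "busybox:1.35",
				Command:         []string{"sh", "-c", "id -u"}, // should print 65534
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}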
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":530,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:40.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments May 13 22:06:40.998: INFO: Waiting up to 5m0s for pod "client-containers-2e819064-9792-4467-8b40-6b141e683719" in namespace "containers-8144" to be "Succeeded or Failed" May 13 22:06:41.000: INFO: Pod "client-containers-2e819064-9792-4467-8b40-6b141e683719": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02995ms May 13 22:06:43.004: INFO: Pod "client-containers-2e819064-9792-4467-8b40-6b141e683719": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005316459s May 13 22:06:45.008: INFO: Pod "client-containers-2e819064-9792-4467-8b40-6b141e683719": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009591255s STEP: Saw pod success May 13 22:06:45.008: INFO: Pod "client-containers-2e819064-9792-4467-8b40-6b141e683719" satisfied condition "Succeeded or Failed" May 13 22:06:45.011: INFO: Trying to get logs from node node1 pod client-containers-2e819064-9792-4467-8b40-6b141e683719 container agnhost-container: STEP: delete the pod May 13 22:06:45.025: INFO: Waiting for pod client-containers-2e819064-9792-4467-8b40-6b141e683719 to disappear May 13 22:06:45.027: INFO: Pod client-containers-2e819064-9792-4467-8b40-6b141e683719 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:45.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8144" for this suite. 
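"Override the image's default arguments (docker cmd)" maps onto the Args field of the container spec: Command replaces the image's ENTRYPOINT, Args replaces its CMD, and setting Args alone keeps the entrypoint while swapping the arguments. An illustrative sketch (image and argument values are placeholders, not the test's exact ones):

// Sketch only: Args overrides the image CMD; ENTRYPOINT is untouched.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				// Command left unset on purpose: the image ENTRYPOINT runs
				// with these arguments instead of its built-in CMD.
				Args: []string{"entrypoint-tester", "override", "arguments"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}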
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":144,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:45.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:06:45.141: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff29d237-5464-410c-bdf8-53c3838149a3" in namespace "downward-api-8918" to be "Succeeded or Failed" May 13 22:06:45.144: INFO: Pod "downwardapi-volume-ff29d237-5464-410c-bdf8-53c3838149a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152644ms May 13 22:06:47.146: INFO: Pod "downwardapi-volume-ff29d237-5464-410c-bdf8-53c3838149a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005100535s May 13 22:06:49.150: INFO: Pod "downwardapi-volume-ff29d237-5464-410c-bdf8-53c3838149a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00824956s STEP: Saw pod success May 13 22:06:49.150: INFO: Pod "downwardapi-volume-ff29d237-5464-410c-bdf8-53c3838149a3" satisfied condition "Succeeded or Failed" May 13 22:06:49.152: INFO: Trying to get logs from node node1 pod downwardapi-volume-ff29d237-5464-410c-bdf8-53c3838149a3 container client-container: STEP: delete the pod May 13 22:06:49.165: INFO: Waiting for pod downwardapi-volume-ff29d237-5464-410c-bdf8-53c3838149a3 to disappear May 13 22:06:49.167: INFO: Pod downwardapi-volume-ff29d237-5464-410c-bdf8-53c3838149a3 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:49.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8918" for this suite. 
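The "podname only" case projects metadata.name into a file through a downward API volume, which the container then reads back. A sketch with illustrative paths and image:

// Sketch only: downward API volume exposing the pod's own name as a file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox:1.35",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}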
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":193,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:43.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:06:43.598: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d4f2819-2b38-4a66-a1fb-97cce283921a" in namespace "projected-3716" to be "Succeeded or Failed" May 13 22:06:43.604: INFO: Pod "downwardapi-volume-5d4f2819-2b38-4a66-a1fb-97cce283921a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.979201ms May 13 22:06:45.608: INFO: Pod "downwardapi-volume-5d4f2819-2b38-4a66-a1fb-97cce283921a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009938225s May 13 22:06:47.610: INFO: Pod "downwardapi-volume-5d4f2819-2b38-4a66-a1fb-97cce283921a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01278823s May 13 22:06:49.615: INFO: Pod "downwardapi-volume-5d4f2819-2b38-4a66-a1fb-97cce283921a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016982524s STEP: Saw pod success May 13 22:06:49.615: INFO: Pod "downwardapi-volume-5d4f2819-2b38-4a66-a1fb-97cce283921a" satisfied condition "Succeeded or Failed" May 13 22:06:49.617: INFO: Trying to get logs from node node1 pod downwardapi-volume-5d4f2819-2b38-4a66-a1fb-97cce283921a container client-container: STEP: delete the pod May 13 22:06:49.630: INFO: Waiting for pod downwardapi-volume-5d4f2819-2b38-4a66-a1fb-97cce283921a to disappear May 13 22:06:49.632: INFO: Pod downwardapi-volume-5d4f2819-2b38-4a66-a1fb-97cce283921a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:49.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3716" for this suite. 
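When a container sets no CPU limit, a downward API resourceFieldRef for limits.cpu resolves to the node's allocatable CPU, which is what the test above asserts. A sketch of the projected downward-API volume involved; the divisor, file path, and image are illustrative:

// Sketch only: limits.cpu with no limit set falls back to node allocatable.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpulimit-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
										Divisor:       resource.MustParse("1m"), // report in millicores
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				// no resources.limits set on purpose, so the value written to
				// the file is the node's allocatable CPU
				Name:         "client-container",
				Image:        "busybox:1.35",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}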
• [SLOW TEST:6.115 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":532,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:36.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:06:36.777: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 13 22:06:45.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2780 --namespace=crd-publish-openapi-2780 create -f -' May 13 22:06:45.904: INFO: stderr: "" May 13 22:06:45.904: INFO: stdout: "e2e-test-crd-publish-openapi-248-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 13 22:06:45.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2780 --namespace=crd-publish-openapi-2780 delete e2e-test-crd-publish-openapi-248-crds test-cr' May 13 22:06:46.087: INFO: stderr: "" May 13 22:06:46.087: INFO: stdout: "e2e-test-crd-publish-openapi-248-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 13 22:06:46.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2780 --namespace=crd-publish-openapi-2780 apply -f -' May 13 22:06:46.448: INFO: stderr: "" May 13 22:06:46.448: INFO: stdout: "e2e-test-crd-publish-openapi-248-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 13 22:06:46.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2780 --namespace=crd-publish-openapi-2780 delete e2e-test-crd-publish-openapi-248-crds test-cr' May 13 22:06:46.609: INFO: stderr: "" May 13 22:06:46.609: INFO: stdout: "e2e-test-crd-publish-openapi-248-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 13 22:06:46.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2780 explain e2e-test-crd-publish-openapi-248-crds' May 13 22:06:46.972: INFO: stderr: "" May 13 22:06:46.972: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-248-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" 
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:51.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2780" for this suite. • [SLOW TEST:14.889 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":39,"skipped":700,"failed":0} [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:51.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:51.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9830" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":40,"skipped":700,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:51.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:51.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1519" for this suite. 
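The ServiceAccount lifecycle above is ordinary CRUD plus a label-selector lookup. A client-go sketch under the same kubeconfig assumption; names and labels are placeholders:

// Sketch only: create, patch a label, find by selector, delete.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "default"
	sas := client.CoreV1().ServiceAccounts(ns)

	if _, err := sas.Create(ctx, &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{Name: "sketch-sa"},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	patch := []byte(`{"metadata":{"labels":{"sketch":"true"}}}`)
	sas.Patch(ctx, "sketch-sa", types.MergePatchType, patch, metav1.PatchOptions{})

	list, err := sas.List(ctx, metav1.ListOptions{LabelSelector: "sketch=true"})
	if err != nil {
		panic(err)
	}
	fmt.Println("matched by label selector:", len(list.Items))
	sas.Delete(ctx, "sketch-sa", metav1.DeleteOptions{})
}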
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":41,"skipped":708,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:49.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:06:49.229: INFO: Waiting up to 5m0s for pod "downwardapi-volume-399bd4b1-072a-45d2-ae80-edadfc016769" in namespace "projected-8992" to be "Succeeded or Failed" May 13 22:06:49.232: INFO: Pod "downwardapi-volume-399bd4b1-072a-45d2-ae80-edadfc016769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191628ms May 13 22:06:51.235: INFO: Pod "downwardapi-volume-399bd4b1-072a-45d2-ae80-edadfc016769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005977742s May 13 22:06:53.239: INFO: Pod "downwardapi-volume-399bd4b1-072a-45d2-ae80-edadfc016769": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009282447s STEP: Saw pod success May 13 22:06:53.239: INFO: Pod "downwardapi-volume-399bd4b1-072a-45d2-ae80-edadfc016769" satisfied condition "Succeeded or Failed" May 13 22:06:53.241: INFO: Trying to get logs from node node2 pod downwardapi-volume-399bd4b1-072a-45d2-ae80-edadfc016769 container client-container: STEP: delete the pod May 13 22:06:53.256: INFO: Waiting for pod downwardapi-volume-399bd4b1-072a-45d2-ae80-edadfc016769 to disappear May 13 22:06:53.258: INFO: Pod downwardapi-volume-399bd4b1-072a-45d2-ae80-edadfc016769 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:53.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8992" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":206,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:28.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-dqjh STEP: Creating a pod to test atomic-volume-subpath May 13 22:06:28.582: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dqjh" in namespace "subpath-9182" to be "Succeeded or Failed" May 13 22:06:28.584: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226333ms May 13 22:06:30.588: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005911287s May 13 22:06:32.592: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Running", Reason="", readiness=true. Elapsed: 4.010141255s May 13 22:06:34.597: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Running", Reason="", readiness=true. Elapsed: 6.014334902s May 13 22:06:36.601: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Running", Reason="", readiness=true. Elapsed: 8.01860081s May 13 22:06:38.610: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Running", Reason="", readiness=true. Elapsed: 10.028167948s May 13 22:06:40.615: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Running", Reason="", readiness=true. Elapsed: 12.032848752s May 13 22:06:42.620: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Running", Reason="", readiness=true. Elapsed: 14.038185728s May 13 22:06:44.624: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Running", Reason="", readiness=true. Elapsed: 16.041263958s May 13 22:06:46.627: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Running", Reason="", readiness=true. Elapsed: 18.04481069s May 13 22:06:48.630: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Running", Reason="", readiness=true. Elapsed: 20.047760963s May 13 22:06:50.634: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Running", Reason="", readiness=true. Elapsed: 22.05193655s May 13 22:06:52.638: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Running", Reason="", readiness=true. Elapsed: 24.056048844s May 13 22:06:54.642: INFO: Pod "pod-subpath-test-configmap-dqjh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.059839446s STEP: Saw pod success May 13 22:06:54.642: INFO: Pod "pod-subpath-test-configmap-dqjh" satisfied condition "Succeeded or Failed" May 13 22:06:54.645: INFO: Trying to get logs from node node1 pod pod-subpath-test-configmap-dqjh container test-container-subpath-configmap-dqjh: STEP: delete the pod May 13 22:06:54.666: INFO: Waiting for pod pod-subpath-test-configmap-dqjh to disappear May 13 22:06:54.668: INFO: Pod pod-subpath-test-configmap-dqjh no longer exists STEP: Deleting pod pod-subpath-test-configmap-dqjh May 13 22:06:54.668: INFO: Deleting pod "pod-subpath-test-configmap-dqjh" in namespace "subpath-9182" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:54.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9182" for this suite. • [SLOW TEST:26.136 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":157,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:49.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8062.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8062.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8062.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8062.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 22:06:55.760: INFO: DNS probes using dns-8062/dns-test-98860e0d-f332-4f3d-9aa4-a08681181957 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:55.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8062" for this suite. • [SLOW TEST:6.092 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":562,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:51.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-fd9f3266-8d5e-4bdd-bb87-2ae2b60344ca STEP: Creating a pod to test consume configMaps May 13 22:06:51.849: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-66e4f5d0-7fb1-43a1-9208-4fee9eeac083" in namespace "projected-5124" to be "Succeeded or Failed" May 13 22:06:51.851: INFO: Pod "pod-projected-configmaps-66e4f5d0-7fb1-43a1-9208-4fee9eeac083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290741ms May 13 22:06:53.855: INFO: Pod "pod-projected-configmaps-66e4f5d0-7fb1-43a1-9208-4fee9eeac083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006313929s May 13 22:06:55.858: INFO: Pod "pod-projected-configmaps-66e4f5d0-7fb1-43a1-9208-4fee9eeac083": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008700599s STEP: Saw pod success May 13 22:06:55.858: INFO: Pod "pod-projected-configmaps-66e4f5d0-7fb1-43a1-9208-4fee9eeac083" satisfied condition "Succeeded or Failed" May 13 22:06:55.860: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-66e4f5d0-7fb1-43a1-9208-4fee9eeac083 container agnhost-container: STEP: delete the pod May 13 22:06:55.871: INFO: Waiting for pod pod-projected-configmaps-66e4f5d0-7fb1-43a1-9208-4fee9eeac083 to disappear May 13 22:06:55.873: INFO: Pod pod-projected-configmaps-66e4f5d0-7fb1-43a1-9208-4fee9eeac083 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:55.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5124" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":718,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:53.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-bdac761d-0e42-455e-a66e-1258f84d0e08 STEP: Creating a pod to test consume configMaps May 13 22:06:53.343: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-78d1a989-7c97-45b1-81df-3bd98777a2f7" in namespace "projected-1298" to be "Succeeded or Failed" May 13 22:06:53.345: INFO: Pod "pod-projected-configmaps-78d1a989-7c97-45b1-81df-3bd98777a2f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165257ms May 13 22:06:55.349: INFO: Pod "pod-projected-configmaps-78d1a989-7c97-45b1-81df-3bd98777a2f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005803822s May 13 22:06:57.353: INFO: Pod "pod-projected-configmaps-78d1a989-7c97-45b1-81df-3bd98777a2f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009851896s STEP: Saw pod success May 13 22:06:57.353: INFO: Pod "pod-projected-configmaps-78d1a989-7c97-45b1-81df-3bd98777a2f7" satisfied condition "Succeeded or Failed" May 13 22:06:57.355: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-78d1a989-7c97-45b1-81df-3bd98777a2f7 container agnhost-container: STEP: delete the pod May 13 22:06:57.366: INFO: Waiting for pod pod-projected-configmaps-78d1a989-7c97-45b1-81df-3bd98777a2f7 to disappear May 13 22:06:57.368: INFO: Pod pod-projected-configmaps-78d1a989-7c97-45b1-81df-3bd98777a2f7 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:57.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1298" for this suite. 
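Both projected-configMap cases in this stretch reduce to a ConfigMap exposed through a projected volume, optionally remapping a key to a path and forcing a per-file mode. A sketch with illustrative key, path, mode, and image:

// Sketch only: projected volume with a ConfigMap source, key remapped and
// file mode pinned to 0400.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "sketch-cm"},
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "agnhost-container",
				Image:        "busybox:1.35",
				Command:      []string{"sh", "-c", "cat /etc/projected/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}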
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":228,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:55.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args May 13 22:06:55.826: INFO: Waiting up to 5m0s for pod "var-expansion-90f04f80-47ba-4610-beea-c84a3cbbb853" in namespace "var-expansion-9215" to be "Succeeded or Failed" May 13 22:06:55.829: INFO: Pod "var-expansion-90f04f80-47ba-4610-beea-c84a3cbbb853": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162894ms May 13 22:06:57.832: INFO: Pod "var-expansion-90f04f80-47ba-4610-beea-c84a3cbbb853": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006010098s May 13 22:06:59.836: INFO: Pod "var-expansion-90f04f80-47ba-4610-beea-c84a3cbbb853": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009674076s STEP: Saw pod success May 13 22:06:59.836: INFO: Pod "var-expansion-90f04f80-47ba-4610-beea-c84a3cbbb853" satisfied condition "Succeeded or Failed" May 13 22:06:59.838: INFO: Trying to get logs from node node1 pod var-expansion-90f04f80-47ba-4610-beea-c84a3cbbb853 container dapi-container: STEP: delete the pod May 13 22:06:59.852: INFO: Waiting for pod var-expansion-90f04f80-47ba-4610-beea-c84a3cbbb853 to disappear May 13 22:06:59.854: INFO: Pod var-expansion-90f04f80-47ba-4610-beea-c84a3cbbb853 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:06:59.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9215" for this suite. 
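The var-expansion spec that just passed relies on the kubelet substituting $(VAR) references in a container's args from that container's environment before the command runs. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: arg-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c"]
    args: ["echo $(GREETING) $(TARGET)"]   # expanded by the kubelet, not the shell
    env:
    - name: GREETING
      value: hello
    - name: TARGET
      value: world
EOF
kubectl logs arg-expansion-demo   # prints "hello world" after completion
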
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":565,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:05:21.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0513 22:05:21.644862 36 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:01.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-270" for this suite. • [SLOW TEST:100.051 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":28,"skipped":460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:55.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-4553/configmap-test-6a7f4776-1043-42d7-98be-d2e06f0b5f56 STEP: Creating a pod to test consume configMaps May 13 22:06:55.917: INFO: Waiting up to 5m0s for pod "pod-configmaps-71da8e39-9a04-468f-97fe-c32e03cfab73" in namespace "configmap-4553" to be "Succeeded or Failed" May 13 22:06:55.919: INFO: Pod "pod-configmaps-71da8e39-9a04-468f-97fe-c32e03cfab73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506693ms May 13 22:06:57.923: INFO: Pod "pod-configmaps-71da8e39-9a04-468f-97fe-c32e03cfab73": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005891007s May 13 22:06:59.925: INFO: Pod "pod-configmaps-71da8e39-9a04-468f-97fe-c32e03cfab73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008179261s May 13 22:07:01.929: INFO: Pod "pod-configmaps-71da8e39-9a04-468f-97fe-c32e03cfab73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011590312s STEP: Saw pod success May 13 22:07:01.929: INFO: Pod "pod-configmaps-71da8e39-9a04-468f-97fe-c32e03cfab73" satisfied condition "Succeeded or Failed" May 13 22:07:01.931: INFO: Trying to get logs from node node1 pod pod-configmaps-71da8e39-9a04-468f-97fe-c32e03cfab73 container env-test: STEP: delete the pod May 13 22:07:01.992: INFO: Waiting for pod pod-configmaps-71da8e39-9a04-468f-97fe-c32e03cfab73 to disappear May 13 22:07:01.994: INFO: Pod pod-configmaps-71da8e39-9a04-468f-97fe-c32e03cfab73 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:01.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4553" for this suite. • [SLOW TEST:6.118 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":719,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:57.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 13 22:06:57.437: INFO: The status of Pod labelsupdate5ff9cd90-72b9-4ce8-9a3c-e3e732d18aba is Pending, waiting for it to be Running (with Ready = true) May 13 22:06:59.440: INFO: The status of Pod labelsupdate5ff9cd90-72b9-4ce8-9a3c-e3e732d18aba is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:01.442: INFO: The status of Pod labelsupdate5ff9cd90-72b9-4ce8-9a3c-e3e732d18aba is Running (Ready = true) May 13 22:07:01.963: INFO: Successfully updated pod "labelsupdate5ff9cd90-72b9-4ce8-9a3c-e3e732d18aba" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:03.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7394" for this suite. 
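The labels-update spec above works because downwardAPI volume files are re-synced by the kubelet after the pod's metadata changes (on the kubelet sync period, hence the short wait in the log). A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    tier: one
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labels-demo tier=two --overwrite
kubectl logs labels-demo --tail=3   # eventually shows tier="two"
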
• [SLOW TEST:6.579 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":241,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:59.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:07:01.916: INFO: Deleting pod "var-expansion-fd6fe147-f716-4444-86b0-08e7de37f4ff" in namespace "var-expansion-8164" May 13 22:07:01.921: INFO: Wait up to 5m0s for pod "var-expansion-fd6fe147-f716-4444-86b0-08e7de37f4ff" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:05.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8164" for this suite. 
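This spec expects pod failure rather than success: a literal absolute subPath is rejected at admission time, but a subPathExpr that only expands to an absolute path is caught later by the kubelet, so the pod is accepted and then never starts. A sketch of a pod that should fail the same way (the behavior described is the expectation, not something shown in this excerpt):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bad-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "true"]
    env:
    - name: ABS
      value: /tmp
    volumeMounts:
    - name: work
      mountPath: /data
      subPathExpr: $(ABS)   # resolves to an absolute path; the kubelet refuses to start the container
  volumes:
  - name: work
    emptyDir: {}
EOF
kubectl get pod bad-subpath-demo   # expected to sit in a container-create error state
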
• [SLOW TEST:6.060 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":31,"skipped":572,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:02.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:07:02.041: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7a008d0-4981-4f3b-a657-c18766632d59" in namespace "downward-api-1343" to be "Succeeded or Failed" May 13 22:07:02.043: INFO: Pod "downwardapi-volume-b7a008d0-4981-4f3b-a657-c18766632d59": Phase="Pending", Reason="", readiness=false. Elapsed: 1.832264ms May 13 22:07:04.046: INFO: Pod "downwardapi-volume-b7a008d0-4981-4f3b-a657-c18766632d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0051004s May 13 22:07:06.051: INFO: Pod "downwardapi-volume-b7a008d0-4981-4f3b-a657-c18766632d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009795008s STEP: Saw pod success May 13 22:07:06.051: INFO: Pod "downwardapi-volume-b7a008d0-4981-4f3b-a657-c18766632d59" satisfied condition "Succeeded or Failed" May 13 22:07:06.054: INFO: Trying to get logs from node node2 pod downwardapi-volume-b7a008d0-4981-4f3b-a657-c18766632d59 container client-container: STEP: delete the pod May 13 22:07:06.066: INFO: Waiting for pod downwardapi-volume-b7a008d0-4981-4f3b-a657-c18766632d59 to disappear May 13 22:07:06.068: INFO: Pod downwardapi-volume-b7a008d0-4981-4f3b-a657-c18766632d59 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:06.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1343" for this suite. 
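The memory-request spec projects the container's own resource request through a resourceFieldRef; with the default divisor the value comes out in bytes. Sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: mem-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
kubectl logs mem-request-demo   # prints 33554432 (32Mi in bytes)
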
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":721,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:03.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7430.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7430.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 22:07:08.062: INFO: DNS probes using dns-7430/dns-test-b2d0749e-326b-419d-82cb-33bb5e67637b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:08.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7430" for this suite. 
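The wheezy/jessie scripts above are just dig loops over UDP and TCP; the underlying check can be reproduced from any pod that has dig (the image and tag below are illustrative, any dig-capable image works):

kubectl run dns-check --rm -it --restart=Never \
  --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4 -- \
  dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A

Swap +notcp for +tcp to exercise the TCP path, as the second half of each probe loop does.
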
• ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":13,"skipped":246,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:06:54.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:10.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2658" for this suite. • [SLOW TEST:16.117 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":-1,"completed":15,"skipped":162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:08.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-03ede38f-0df0-4c09-bf27-c0954189f19e STEP: Creating a pod to test consume secrets May 13 22:07:08.119: INFO: Waiting up to 5m0s for pod "pod-secrets-3548a93d-7753-49ea-8f7e-562569e6fafe" in namespace "secrets-7929" to be "Succeeded or Failed" May 13 22:07:08.121: INFO: Pod "pod-secrets-3548a93d-7753-49ea-8f7e-562569e6fafe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318296ms May 13 22:07:10.125: INFO: Pod "pod-secrets-3548a93d-7753-49ea-8f7e-562569e6fafe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005949028s May 13 22:07:12.128: INFO: Pod "pod-secrets-3548a93d-7753-49ea-8f7e-562569e6fafe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009085367s STEP: Saw pod success May 13 22:07:12.128: INFO: Pod "pod-secrets-3548a93d-7753-49ea-8f7e-562569e6fafe" satisfied condition "Succeeded or Failed" May 13 22:07:12.130: INFO: Trying to get logs from node node1 pod pod-secrets-3548a93d-7753-49ea-8f7e-562569e6fafe container secret-volume-test: STEP: delete the pod May 13 22:07:12.142: INFO: Waiting for pod pod-secrets-3548a93d-7753-49ea-8f7e-562569e6fafe to disappear May 13 22:07:12.143: INFO: Pod pod-secrets-3548a93d-7753-49ea-8f7e-562569e6fafe no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:12.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7929" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":247,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:06.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-30eab484-f93e-4ba3-849b-41102c1088d4 STEP: Creating a pod to test consume configMaps May 13 22:07:06.122: INFO: Waiting up to 5m0s for pod "pod-configmaps-a6d6c6f4-02ff-4a8b-bff2-2dc25b987a0c" in namespace "configmap-1686" to be "Succeeded or Failed" May 13 22:07:06.124: INFO: Pod "pod-configmaps-a6d6c6f4-02ff-4a8b-bff2-2dc25b987a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078795ms May 13 22:07:08.126: INFO: Pod "pod-configmaps-a6d6c6f4-02ff-4a8b-bff2-2dc25b987a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004578323s May 13 22:07:10.130: INFO: Pod "pod-configmaps-a6d6c6f4-02ff-4a8b-bff2-2dc25b987a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00860357s May 13 22:07:12.135: INFO: Pod "pod-configmaps-a6d6c6f4-02ff-4a8b-bff2-2dc25b987a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012941208s May 13 22:07:14.139: INFO: Pod "pod-configmaps-a6d6c6f4-02ff-4a8b-bff2-2dc25b987a0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017263202s STEP: Saw pod success May 13 22:07:14.139: INFO: Pod "pod-configmaps-a6d6c6f4-02ff-4a8b-bff2-2dc25b987a0c" satisfied condition "Succeeded or Failed" May 13 22:07:14.142: INFO: Trying to get logs from node node2 pod pod-configmaps-a6d6c6f4-02ff-4a8b-bff2-2dc25b987a0c container agnhost-container: STEP: delete the pod May 13 22:07:14.155: INFO: Waiting for pod pod-configmaps-a6d6c6f4-02ff-4a8b-bff2-2dc25b987a0c to disappear May 13 22:07:14.157: INFO: Pod pod-configmaps-a6d6c6f4-02ff-4a8b-bff2-2dc25b987a0c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:14.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1686" for this suite. 
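For the defaultMode variant the test image checks file permissions as well as content; by hand that is a stat plus a cat (names illustrative; -L dereferences the symlink the kubelet creates for each key):

kubectl create configmap mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: checker
    image: busybox
    command: ["sh", "-c", "stat -Lc '%a' /etc/cm/data-1; cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: mode-demo
      defaultMode: 0400
EOF
kubectl logs cm-mode-demo   # "400", then "value-1"
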
• [SLOW TEST:8.080 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:34.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-810 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet May 13 22:04:34.029: INFO: Found 0 stateful pods, waiting for 3 May 13 22:04:44.034: INFO: Found 2 stateful pods, waiting for 3 May 13 22:04:54.035: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 13 22:04:54.035: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 13 22:04:54.035: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 13 22:04:54.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-810 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 22:04:54.890: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 13 22:04:54.890: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 22:04:54.890: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 May 13 22:05:04.920: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 13 22:05:14.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-810 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:05:15.320: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 13 22:05:15.320: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 22:05:15.320: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 
22:05:45.337: INFO: Waiting for StatefulSet statefulset-810/ss2 to complete update STEP: Rolling back to a previous revision May 13 22:05:55.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-810 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 22:05:55.832: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 13 22:05:55.832: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 22:05:55.832: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 22:06:05.869: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 13 22:06:15.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-810 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:06:16.132: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 13 22:06:16.132: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 22:06:16.132: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 22:06:36.150: INFO: Waiting for StatefulSet statefulset-810/ss2 to complete update May 13 22:06:36.150: INFO: Waiting for Pod statefulset-810/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 13 22:06:46.156: INFO: Deleting all statefulset in ns statefulset-810 May 13 22:06:46.158: INFO: Scaling statefulset ss2 to 0 May 13 22:07:16.172: INFO: Waiting for statefulset status.replicas updated to 0 May 13 22:07:16.174: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:16.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-810" for this suite. 
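The rolling-update half of this spec is what kubectl set image plus rollout drive from the command line; the rollback half is rollout undo. Using the names from the log (the container name webserver is an assumption, it does not appear in this excerpt):

kubectl -n statefulset-810 set image statefulset/ss2 \
  webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
kubectl -n statefulset-810 rollout status statefulset/ss2
kubectl -n statefulset-810 rollout undo statefulset/ss2

The "Waiting for Pod ... to have revision ... update revision ..." lines are the suite comparing each pod's controller-revision-hash label against the StatefulSet's updateRevision.
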
• [SLOW TEST:162.193 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":12,"skipped":227,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:12.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod May 13 22:07:12.284: INFO: The status of Pod pod-hostip-7dd381bf-a979-426a-9042-fb039c2ed2a2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:14.297: INFO: The status of Pod pod-hostip-7dd381bf-a979-426a-9042-fb039c2ed2a2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:16.289: INFO: The status of Pod pod-hostip-7dd381bf-a979-426a-9042-fb039c2ed2a2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:18.286: INFO: The status of Pod pod-hostip-7dd381bf-a979-426a-9042-fb039c2ed2a2 is Running (Ready = true) May 13 22:07:18.292: INFO: Pod pod-hostip-7dd381bf-a979-426a-9042-fb039c2ed2a2 has hostIP: 10.10.190.207 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:18.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5540" for this suite. 
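The host-IP assertion reduces to reading status.hostIP once the pod is Running; 10.10.190.207 above is simply the address of the node the pod landed on. By hand (names illustrative):

kubectl run hostip-demo --image=busybox --restart=Never -- sleep 60
kubectl wait --for=condition=Ready pod/hostip-demo --timeout=2m
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'
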
• [SLOW TEST:6.051 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":311,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:05.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6563 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6563 I0513 22:07:05.987215 23 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6563, replica count: 2 I0513 22:07:09.038851 23 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:07:12.040220 23 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:07:15.042999 23 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 22:07:15.043: INFO: Creating new exec pod May 13 22:07:20.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6563 exec execpoddspgv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' May 13 22:07:20.359: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 13 22:07:20.359: INFO: stdout: "externalname-service-mplrf" May 13 22:07:20.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6563 exec execpoddspgv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.55.225 80' May 13 22:07:20.617: INFO: stderr: "+ nc -v -t -w 2 10.233.55.225 80\n+ echo hostName\nConnection to 10.233.55.225 80 port [tcp/http] succeeded!\n" May 13 22:07:20.618: INFO: stdout: "externalname-service-bp62w" May 13 22:07:20.618: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 
22:07:20.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6563" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:14.703 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":32,"skipped":576,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:20.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0513 22:07:20.691397 23 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching May 13 22:07:20.699: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching May 13 22:07:20.702: INFO: starting watch STEP: patching STEP: updating May 13 22:07:20.716: INFO: waiting for watch events with expected annotations May 13 22:07:20.716: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:20.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-4116" for this suite. 
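The CronJob API spec is plain CRUD plus /status access against batch/v1beta1, hence the deprecation warning repeated above. The same operations by hand, shown against batch/v1 (the version the warning recommends):

kubectl create cronjob api-demo --image=busybox --schedule="*/1 * * * *" -- echo hello
kubectl get cronjob api-demo -o yaml
kubectl patch cronjob api-demo --type=merge -p '{"metadata":{"annotations":{"patched":"true"}}}'
kubectl delete cronjob api-demo
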
• ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":33,"skipped":582,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:16.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota May 13 22:07:16.269: INFO: Pod name sample-pod: Found 0 pods out of 1 May 13 22:07:21.275: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:21.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6213" for this suite. • [SLOW TEST:5.058 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":13,"skipped":253,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:10.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 13 22:07:10.889: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:22.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1149" for this 
suite. • [SLOW TEST:11.512 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:20.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-41636317-196e-4d63-be37-06487abca061 STEP: Creating a pod to test consume secrets May 13 22:07:20.794: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dde4b4e8-4bc4-4b6e-a1a5-7570f286d76a" in namespace "projected-9090" to be "Succeeded or Failed" May 13 22:07:20.796: INFO: Pod "pod-projected-secrets-dde4b4e8-4bc4-4b6e-a1a5-7570f286d76a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358173ms May 13 22:07:22.799: INFO: Pod "pod-projected-secrets-dde4b4e8-4bc4-4b6e-a1a5-7570f286d76a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005276058s May 13 22:07:24.802: INFO: Pod "pod-projected-secrets-dde4b4e8-4bc4-4b6e-a1a5-7570f286d76a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008421816s STEP: Saw pod success May 13 22:07:24.802: INFO: Pod "pod-projected-secrets-dde4b4e8-4bc4-4b6e-a1a5-7570f286d76a" satisfied condition "Succeeded or Failed" May 13 22:07:24.805: INFO: Trying to get logs from node node1 pod pod-projected-secrets-dde4b4e8-4bc4-4b6e-a1a5-7570f286d76a container projected-secret-volume-test: STEP: delete the pod May 13 22:07:24.815: INFO: Waiting for pod pod-projected-secrets-dde4b4e8-4bc4-4b6e-a1a5-7570f286d76a to disappear May 13 22:07:24.817: INFO: Pod pod-projected-secrets-dde4b4e8-4bc4-4b6e-a1a5-7570f286d76a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:24.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9090" for this suite. 
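In the non-root variant, fsGroup is what makes a 0440 file readable: the kubelet assigns the projected files to the supplemental group, so a non-root UID in that group can read them despite the restrictive mode. Sketch (names and IDs illustrative):

kubectl create secret generic fsgroup-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-secret-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "id; ls -ln /etc/projected; cat /etc/projected/data-1"]
    volumeMounts:
    - name: sec
      mountPath: /etc/projected
  volumes:
  - name: sec
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: fsgroup-demo
EOF
kubectl logs nonroot-secret-demo   # uid=1000, file group 1000, content "value-1"
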
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":583,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:18.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1284 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1284;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1284 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1284;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1284.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1284.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1284.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1284.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1284.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1284.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1284.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1284.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1284.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1284.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1284.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1284.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1284.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 17.37.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.37.17_udp@PTR;check="$$(dig +tcp +noall +answer +search 17.37.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.37.17_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1284 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1284;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1284 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1284;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1284.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1284.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1284.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1284.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1284.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1284.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1284.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1284.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1284.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1284.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1284.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1284.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1284.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 17.37.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.37.17_udp@PTR;check="$$(dig +tcp +noall +answer +search 17.37.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.37.17_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 22:07:26.398: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.415: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.493: INFO: Unable to read wheezy_udp@dns-test-service.dns-1284 from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.505: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1284 from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.509: INFO: Unable to read wheezy_udp@dns-test-service.dns-1284.svc from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.512: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1284.svc from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.515: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1284.svc from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.522: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1284.svc from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.540: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.544: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.546: INFO: Unable to read jessie_udp@dns-test-service.dns-1284 from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.548: INFO: Unable to read jessie_tcp@dns-test-service.dns-1284 from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.551: INFO: Unable to read jessie_udp@dns-test-service.dns-1284.svc from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.553: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-1284.svc from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.556: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1284.svc from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.558: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1284.svc from pod dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99: the server could not find the requested resource (get pods dns-test-cb732022-0c75-4782-8140-5cc469d3ad99) May 13 22:07:26.570: INFO: Lookups using dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1284 wheezy_tcp@dns-test-service.dns-1284 wheezy_udp@dns-test-service.dns-1284.svc wheezy_tcp@dns-test-service.dns-1284.svc wheezy_udp@_http._tcp.dns-test-service.dns-1284.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1284.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1284 jessie_tcp@dns-test-service.dns-1284 jessie_udp@dns-test-service.dns-1284.svc jessie_tcp@dns-test-service.dns-1284.svc jessie_udp@_http._tcp.dns-test-service.dns-1284.svc jessie_tcp@_http._tcp.dns-test-service.dns-1284.svc] May 13 22:07:31.659: INFO: DNS probes using dns-1284/dns-test-cb732022-0c75-4782-8140-5cc469d3ad99 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:31.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1284" for this suite. 
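The burst of "Unable to read ..." lines is the harness polling the prober's /results files before the first successful lookups have been written; the spec converges five seconds later. The partial-qualified-name behavior itself comes from the pod's resolv.conf search path, which dig +search honors, so from a pod in the same namespace (names below are the ones in this log, valid only while the test service existed):

cat /etc/resolv.conf              # search dns-1284.svc.cluster.local svc.cluster.local cluster.local ...
dig +search +short dns-test-service A
dig +search +short dns-test-service.dns-1284.svc A
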
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 22:07:01.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-8966
[It] should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating statefulset ss in namespace statefulset-8966
May 13 22:07:01.797: INFO: Found 0 stateful pods, waiting for 1
May 13 22:07:11.802: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
STEP: Patch a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
May 13 22:07:11.821: INFO: Deleting all statefulset in ns statefulset-8966
May 13 22:07:11.823: INFO: Scaling statefulset ss to 0
May 13 22:07:31.837: INFO: Waiting for statefulset status.replicas updated to 0
May 13 22:07:31.840: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 22:07:31.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8966" for this suite.

• [SLOW TEST:30.090 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":29,"skipped":513,"failed":0}
SSS
------------------------------
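[Editor's note] The get/update/patch steps in the scale-subresource test above never modify the StatefulSet object directly; they go through its /scale subresource, the same endpoint kubectl scale uses. A rough hand-run equivalent of the sequence, with the namespace and name taken from the log and the replica count illustrative:

    # read the scale subresource straight from the API
    kubectl get --raw /apis/apps/v1/namespaces/statefulset-8966/statefulsets/ss/scale
    # update it; kubectl scale writes the same subresource, which is what the test verifies
    kubectl -n statefulset-8966 scale statefulset ss --replicas=2
    # confirm the change landed on the parent object's spec.replicas
    kubectl -n statefulset-8966 get statefulset ss -o jsonpath='{.spec.replicas}'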
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 22:06:35.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 13 22:06:35.715: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 22:07:36.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6238" for this suite.

• [SLOW TEST:61.312 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":21,"skipped":438,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 22:07:21.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 22:07:37.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8641" for this suite.

• [SLOW TEST:16.109 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":14,"skipped":284,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSS
------------------------------
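[Editor's note] The Terminating/NotTerminating scopes exercised in the ResourceQuota test above split pods by whether spec.activeDeadlineSeconds is set: a scoped quota counts only pods matching its scope, which is why each pod in the test shows up in exactly one of the two quotas. A minimal sketch of the pair of quota objects involved (the object names and pod limits here are illustrative, not the test's):

    kubectl -n resourcequota-8641 apply -f - <<'EOF'
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: quota-terminating          # illustrative name
    spec:
      hard:
        pods: "1"
      scopes: ["Terminating"]          # counts only pods with activeDeadlineSeconds set
    ---
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: quota-not-terminating      # illustrative name
    spec:
      hard:
        pods: "1"
      scopes: ["NotTerminating"]       # counts only pods without activeDeadlineSeconds
    EOF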
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 22:05:23.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8915
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-8915
I0513 22:05:23.960232      27 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8915, replica count: 2
I0513 22:05:27.011621      27 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0513 22:05:30.013291      27 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 13 22:05:30.013: INFO: Creating new exec pod
May 13 22:05:35.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
May 13 22:05:35.308: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
May 13 22:05:35.308: INFO: stdout: "externalname-service-nl6sq"
May 13 22:05:35.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.45.106 80'
May 13 22:05:35.544: INFO: stderr: "+ nc -v -t -w 2 10.233.45.106 80\n+ echo hostName\nConnection to 10.233.45.106 80 port [tcp/http] succeeded!\n"
May 13 22:05:35.544: INFO: stdout: ""
May 13 22:05:36.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.45.106 80'
May 13 22:05:36.791: INFO: stderr: "+ nc -v -t -w 2 10.233.45.106 80\n+ echo hostName\nConnection to 10.233.45.106 80 port [tcp/http] succeeded!\n"
May 13 22:05:36.791: INFO: stdout: ""
May 13 22:05:37.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.45.106 80'
May 13 22:05:37.799: INFO: stderr: "+ nc -v -t -w 2 10.233.45.106 80\n+ echo hostName\nConnection to 10.233.45.106 80 port [tcp/http] succeeded!\n"
May 13 22:05:37.799: INFO: stdout: "externalname-service-sjxpv"
May 13 22:05:37.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440'
May 13 22:05:38.062: INFO: rc: 1
May 13 22:05:38.062: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32440
nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 22:05:39.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440'
May 13 22:05:39.384: INFO: rc: 1
May 13 22:05:39.384: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 32440
nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 22:05:40.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:40.597: INFO: rc: 1 May 13 22:05:40.597: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:41.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:41.311: INFO: rc: 1 May 13 22:05:41.311: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:42.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:42.377: INFO: rc: 1 May 13 22:05:42.377: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:43.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:44.448: INFO: rc: 1 May 13 22:05:44.448: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:45.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:46.118: INFO: rc: 1 May 13 22:05:46.118: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:05:47.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:47.320: INFO: rc: 1 May 13 22:05:47.320: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:48.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:48.319: INFO: rc: 1 May 13 22:05:48.319: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:49.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:49.368: INFO: rc: 1 May 13 22:05:49.368: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:50.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:50.326: INFO: rc: 1 May 13 22:05:50.326: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:51.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:51.319: INFO: rc: 1 May 13 22:05:51.319: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:05:52.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:52.317: INFO: rc: 1 May 13 22:05:52.317: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:53.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:53.309: INFO: rc: 1 May 13 22:05:53.309: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:54.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:54.310: INFO: rc: 1 May 13 22:05:54.310: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:55.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:55.356: INFO: rc: 1 May 13 22:05:55.356: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:56.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:56.316: INFO: rc: 1 May 13 22:05:56.316: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:05:57.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:57.297: INFO: rc: 1 May 13 22:05:57.297: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:58.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:58.315: INFO: rc: 1 May 13 22:05:58.315: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:05:59.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:05:59.317: INFO: rc: 1 May 13 22:05:59.317: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:00.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:00.321: INFO: rc: 1 May 13 22:06:00.321: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:01.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:01.313: INFO: rc: 1 May 13 22:06:01.313: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:06:02.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:02.340: INFO: rc: 1 May 13 22:06:02.340: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:03.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:03.332: INFO: rc: 1 May 13 22:06:03.332: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:04.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:04.315: INFO: rc: 1 May 13 22:06:04.315: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:05.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:05.328: INFO: rc: 1 May 13 22:06:05.328: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:06.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:06.321: INFO: rc: 1 May 13 22:06:06.321: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:06:07.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:07.309: INFO: rc: 1 May 13 22:06:07.310: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:08.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:08.299: INFO: rc: 1 May 13 22:06:08.300: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:09.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:09.320: INFO: rc: 1 May 13 22:06:09.320: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:10.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:10.322: INFO: rc: 1 May 13 22:06:10.322: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:11.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:11.309: INFO: rc: 1 May 13 22:06:11.309: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:06:12.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:12.314: INFO: rc: 1 May 13 22:06:12.314: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:13.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:14.182: INFO: rc: 1 May 13 22:06:14.182: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:15.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:15.314: INFO: rc: 1 May 13 22:06:15.314: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:16.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:16.335: INFO: rc: 1 May 13 22:06:16.335: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:17.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:17.336: INFO: rc: 1 May 13 22:06:17.336: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:06:18.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:18.296: INFO: rc: 1 May 13 22:06:18.296: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:19.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:19.287: INFO: rc: 1 May 13 22:06:19.287: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:20.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:20.310: INFO: rc: 1 May 13 22:06:20.310: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:21.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:21.406: INFO: rc: 1 May 13 22:06:21.406: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:22.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:22.322: INFO: rc: 1 May 13 22:06:22.322: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:06:23.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:23.300: INFO: rc: 1 May 13 22:06:23.300: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:24.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:24.323: INFO: rc: 1 May 13 22:06:24.323: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:25.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:25.381: INFO: rc: 1 May 13 22:06:25.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:26.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:26.398: INFO: rc: 1 May 13 22:06:26.398: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:27.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:27.313: INFO: rc: 1 May 13 22:06:27.313: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:06:28.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:28.327: INFO: rc: 1 May 13 22:06:28.327: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:29.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:29.496: INFO: rc: 1 May 13 22:06:29.496: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:30.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:30.692: INFO: rc: 1 May 13 22:06:30.692: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:31.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:31.341: INFO: rc: 1 May 13 22:06:31.341: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:32.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:32.318: INFO: rc: 1 May 13 22:06:32.318: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:06:33.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:33.323: INFO: rc: 1 May 13 22:06:33.323: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:34.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:34.348: INFO: rc: 1 May 13 22:06:34.348: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:35.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:35.324: INFO: rc: 1 May 13 22:06:35.324: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:36.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:36.578: INFO: rc: 1 May 13 22:06:36.578: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:37.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:37.313: INFO: rc: 1 May 13 22:06:37.313: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
May 13 22:06:38.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:38.324: INFO: rc: 1 May 13 22:06:38.324: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:39.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:39.335: INFO: rc: 1 May 13 22:06:39.335: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:40.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:40.298: INFO: rc: 1 May 13 22:06:40.298: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:41.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:41.326: INFO: rc: 1 May 13 22:06:41.326: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... May 13 22:06:42.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440' May 13 22:06:42.457: INFO: rc: 1 May 13 22:06:42.457: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32440 nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
[The identical probe was retried roughly once per second. Every attempt from May 13 22:06:39.064 through May 13 22:07:38.333 produced the same Connection refused failure (rc: 1; "nc: connect to 10.10.190.207 port 32440 (tcp) failed: Connection refused"; "Retrying..."), with only the timestamps varying; the repeated entries are elided here.]
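The cadence just summarized, a one-second retry interval inside a two-minute budget, is what yields the "2m0s timeout" failure below. A stdlib-only Go approximation of that loop follows; it assumes the machine running it can route to the node IP directly, whereas the test dials from inside the helper pod, and it is a sketch of the observed schedule rather than the framework's actual helper.

```go
// reachability_loop.go: retry a TCP connect once per second until a
// 2-minute deadline, approximating the retry schedule seen in this log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "10.10.190.207:32440" // node IP + NodePort from the log
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		// The 2s dial timeout mirrors `nc -w 2` in the probe command.
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("service reachable")
			return
		}
		fmt.Printf("connect failed: %v; retrying...\n", err)
		time.Sleep(1 * time.Second)
	}
	fmt.Println("service is not reachable within 2m0s timeout on endpoint " + endpoint + " over TCP protocol")
}
```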
May 13 22:07:38.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8915 exec execpodcnzxk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32440'
May 13 22:07:38.594: INFO: rc: 1
May 13 22:07:38.594: INFO: Service reachability failing with the same "Connection refused" error as every prior attempt. Retrying...
May 13 22:07:38.595: FAIL: Unexpected error:
    <*errors.errorString | 0xc00580a430>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32440 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32440 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001da4300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001da4300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001da4300, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
May 13 22:07:38.596: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-8915".
STEP: Found 17 events.
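The 17 events are printed below. For reproducing this triage by hand, a hedged client-go sketch of an equivalent query follows, using the kubeconfig path from this run; the output format only approximates the framework's.

```go
// dump_events.go: list the Event objects in the test namespace, roughly
// matching the entries the suite's AfterEach collects below.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	events, err := clientset.CoreV1().Events("services-8915").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d events.\n", len(events.Items))
	for _, e := range events.Items {
		fmt.Printf("At %s - event for %s: {%s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
}
```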
May 13 22:07:38.623: INFO: At 2022-05-13 22:05:23 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-sjxpv
May 13 22:07:38.623: INFO: At 2022-05-13 22:05:23 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-nl6sq
May 13 22:07:38.623: INFO: At 2022-05-13 22:05:23 +0000 UTC - event for externalname-service-nl6sq: {default-scheduler } Scheduled: Successfully assigned services-8915/externalname-service-nl6sq to node1
May 13 22:07:38.623: INFO: At 2022-05-13 22:05:23 +0000 UTC - event for externalname-service-sjxpv: {default-scheduler } Scheduled: Successfully assigned services-8915/externalname-service-sjxpv to node2
May 13 22:07:38.623: INFO: At 2022-05-13 22:05:25 +0000 UTC - event for externalname-service-nl6sq: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:07:38.623: INFO: At 2022-05-13 22:05:25 +0000 UTC - event for externalname-service-nl6sq: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 314.408722ms
May 13 22:07:38.623: INFO: At 2022-05-13 22:05:25 +0000 UTC - event for externalname-service-nl6sq: {kubelet node1} Created: Created container externalname-service
May 13 22:07:38.623: INFO: At 2022-05-13 22:05:26 +0000 UTC - event for externalname-service-nl6sq: {kubelet node1} Started: Started container externalname-service
May 13 22:07:38.623: INFO: At 2022-05-13 22:05:26 +0000 UTC - event for externalname-service-sjxpv: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:07:38.624: INFO: At 2022-05-13 22:05:27 +0000 UTC - event for externalname-service-sjxpv: {kubelet node2} Created: Created container externalname-service
May 13 22:07:38.624: INFO: At 2022-05-13 22:05:27 +0000 UTC - event for externalname-service-sjxpv: {kubelet node2} Started: Started container externalname-service
May 13 22:07:38.624: INFO: At 2022-05-13 22:05:27 +0000 UTC - event for externalname-service-sjxpv: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 330.68382ms
May 13 22:07:38.624: INFO: At 2022-05-13 22:05:30 +0000 UTC - event for execpodcnzxk: {default-scheduler } Scheduled: Successfully assigned services-8915/execpodcnzxk to node1
May 13 22:07:38.624: INFO: At 2022-05-13 22:05:31 +0000 UTC - event for execpodcnzxk: {kubelet node1} Created: Created container agnhost-container
May 13 22:07:38.624: INFO: At 2022-05-13 22:05:31 +0000 UTC - event for execpodcnzxk: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 22:07:38.624: INFO: At 2022-05-13 22:05:31 +0000 UTC - event for execpodcnzxk: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 292.499556ms
May 13 22:07:38.624: INFO: At 2022-05-13 22:05:32 +0000 UTC - event for execpodcnzxk: {kubelet node1} Started: Started container agnhost-container
May 13 22:07:38.626: INFO: POD NODE PHASE GRACE CONDITIONS
May 13 22:07:38.626: INFO: execpodcnzxk node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:30 +0000 UTC }]
May 13 22:07:38.626: INFO: externalname-service-nl6sq node1 Running [{Initialized True
0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:23 +0000 UTC }] May 13 22:07:38.626: INFO: externalname-service-sjxpv node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:05:23 +0000 UTC }] May 13 22:07:38.626: INFO: May 13 22:07:38.631: INFO: Logging node info for node master1 May 13 22:07:38.633: INFO: Node Info: &Node{ObjectMeta:{master1 e893469e-45f9-457b-9379-276178f6209f 47350 0 2022-05-13 19:57:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:57:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-13 19:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-13 20:05:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-13 20:09:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:33 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:33 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:33 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:07:33 +0000 UTC,LastTransitionTime:2022-05-13 20:03:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5bc4f1fb629f4c3bb455995355cca59c,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:196d75bb-273f-44bf-9b96-1cfef0d34445,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:07:38.634: INFO: Logging kubelet events for node master1 May 13 22:07:38.636: INFO: Logging pods the kubelet thinks is on node master1 May 13 22:07:38.645: INFO: kube-apiserver-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses 
recorded) May 13 22:07:38.645: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:07:38.645: INFO: kube-controller-manager-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.645: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:07:38.645: INFO: kube-scheduler-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.645: INFO: Container kube-scheduler ready: true, restart count 0 May 13 22:07:38.645: INFO: kube-flannel-jw4mp started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:07:38.645: INFO: Init container install-cni ready: true, restart count 2 May 13 22:07:38.645: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:07:38.646: INFO: kube-multus-ds-amd64-ts4fz started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.646: INFO: Container kube-multus ready: true, restart count 1 May 13 22:07:38.646: INFO: container-registry-65d7c44b96-gqdgz started at 2022-05-13 20:05:09 +0000 UTC (0+2 container statuses recorded) May 13 22:07:38.646: INFO: Container docker-registry ready: true, restart count 0 May 13 22:07:38.646: INFO: Container nginx ready: true, restart count 0 May 13 22:07:38.646: INFO: kube-proxy-6q994 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.646: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:07:38.646: INFO: node-feature-discovery-controller-cff799f9f-k2qmv started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.646: INFO: Container nfd-controller ready: true, restart count 0 May 13 22:07:38.646: INFO: node-exporter-2jxfg started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:07:38.646: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:07:38.646: INFO: Container node-exporter ready: true, restart count 0 May 13 22:07:38.730: INFO: Latency metrics for node master1 May 13 22:07:38.730: INFO: Logging node info for node master2 May 13 22:07:38.733: INFO: Node Info: &Node{ObjectMeta:{master2 6394fb00-7ac6-4b0d-af37-0e7baf892992 47344 0 2022-05-13 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:33 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:33 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:33 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:07:33 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0c26206724384f32848637ec210bf517,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:87b6bd6a-947f-4fda-a24f-503738da156e,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:07:38.733: INFO: Logging kubelet events for node master2 May 13 22:07:38.736: INFO: Logging pods the kubelet thinks are on node master2 May 13 22:07:38.744: INFO: kube-proxy-jxbwz started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.744: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:07:38.744: INFO: kube-flannel-gndff started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:07:38.744: INFO: Init container install-cni ready: true, restart count 2 May 13 22:07:38.744: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:07:38.744: INFO: kube-multus-ds-amd64-w98wb started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.744: INFO: Container kube-multus ready: true, restart count 1 May 13 22:07:38.744: INFO: coredns-8474476ff8-m6b8s started at 2022-05-13 20:01:00 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.745: INFO: Container coredns ready: true, restart count 1 May 13 22:07:38.745: INFO: kube-apiserver-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.745: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:07:38.745: INFO: kube-scheduler-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.745: INFO: Container kube-scheduler ready: true, restart count 2 May 13 22:07:38.745: INFO: node-exporter-zmlpx started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:07:38.745: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:07:38.745: INFO: Container node-exporter ready: true, restart count 0 May 13 22:07:38.745: INFO: kube-controller-manager-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.745: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:07:38.822: INFO: Latency metrics for node master2 May 13 22:07:38.822: INFO: Logging node info for node master3 May 13 22:07:38.826: INFO: Node Info: &Node{ObjectMeta:{master3 11a40d0b-d9d1-449f-a587-cc897edbfd9b 47321 0 2022-05-13 19:58:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-13 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:31 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:31 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:31 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:07:31 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fba609db464f479c06da20414d1979,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:55d995b3-c2cc-4b60-96f4-5a990abd0c48,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:07:38.826: INFO: Logging kubelet events for node master3 May 13 22:07:38.828: INFO: Logging pods the kubelet thinks are on node master3 May 13 22:07:38.835: INFO: kube-proxy-6fl99 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.835: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:07:38.835: INFO: kube-flannel-p5mwf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:07:38.835: INFO: Init container install-cni ready: true, restart count 0 May 13 22:07:38.835: INFO: Container kube-flannel ready: true, restart count 1 May 13 22:07:38.835: INFO: dns-autoscaler-7df78bfcfb-wfmpz started at 2022-05-13 20:01:02 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.835: INFO: Container autoscaler ready: true, restart count 1 May 13 22:07:38.835: INFO: node-exporter-qh76s started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:07:38.835: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:07:38.835: INFO: Container node-exporter ready: true, restart count 0 May 13 22:07:38.835: INFO: kube-controller-manager-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.835: INFO: Container kube-controller-manager ready: true, restart count 2 May 13 22:07:38.835: INFO: kube-scheduler-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.835: INFO: Container kube-scheduler ready: true, restart count 2 May 13 22:07:38.835: INFO: coredns-8474476ff8-x29nh started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.835: INFO: Container coredns ready: true, restart count 1 May 13 22:07:38.835: INFO: kube-apiserver-master3 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.835: INFO: Container kube-apiserver ready: true, restart count 0 May 13 22:07:38.835: INFO: kube-multus-ds-amd64-ffgk5 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.835: INFO: Container kube-multus ready: true, restart count 1 May 13 22:07:38.938: INFO: Latency metrics for node master3 May 13 22:07:38.938: INFO: Logging node info for node node1 May 13 22:07:38.941: INFO: Node Info: &Node{ObjectMeta:{node1 dca01e5e-a739-4ccc-b102-bfd163c4b832 47414 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true
feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:12:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:20 +0000 UTC,LastTransitionTime:2022-05-13 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:36 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:36 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:36 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:07:36 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f73ea6ef9607468c91208265a5b02a1b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ff172cf5-ca8f-45aa-ade2-6dea8be1d249,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003949300,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:07:38.942: INFO: Logging kubelet events for node node1 May 13 22:07:38.944: INFO: Logging pods the kubelet thinks are on node node1 May 13 22:07:38.963: INFO: kubernetes-dashboard-785dcbb76d-tcgth started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.963: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 22:07:38.963: INFO: kube-flannel-xfj7m started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:07:38.963: INFO: Init container install-cni ready: true, restart count 2 May 13 22:07:38.963: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:07:38.963: INFO: cmk-tfblh started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded) May 13 22:07:38.963: INFO: Container nodereport ready: true, restart count 0 May 13 22:07:38.963: INFO: Container reconcile ready: true, restart count 0 May 13 22:07:38.963: INFO: externalsvc-vlw4x started at 2022-05-13 22:07:14 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.963: INFO: Container externalsvc ready: false, restart count 0 May 13 22:07:38.963: INFO: execpodlwnz7 started at 2022-05-13 22:07:20 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.963: INFO: Container agnhost-container ready: true, restart count 0 May 13 22:07:38.963: INFO: dns-test-260d529c-0d85-4378-ae47-ffa465dd5192 started at 2022-05-13 22:07:31 +0000 UTC (0+3 container statuses recorded) May 13 22:07:38.963: INFO: Container jessie-querier ready: true, restart count 0 May 13 22:07:38.963: INFO: Container querier ready: true, restart count 0 May 13 22:07:38.963: INFO: Container webserver ready: true, restart count 0 May 13 22:07:38.963: INFO: cmk-webhook-6c9d5f8578-59hj6 started at 2022-05-13 20:13:16 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.963: INFO: Container cmk-webhook ready: true, restart count 0 May 13 22:07:38.963: INFO: collectd-p26j2 started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded) May 13 22:07:38.963: INFO: Container collectd ready: true, restart count 0 May 13 22:07:38.963: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:07:38.963: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:07:38.963: INFO: kube-multus-ds-amd64-dtt2x started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.963: INFO: Container kube-multus ready: true, restart count 1 May 13 22:07:38.963: INFO: node-feature-discovery-worker-l459c started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.963: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:07:38.963: INFO: node-exporter-42x8d started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:07:38.963: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:07:38.963: INFO: Container node-exporter ready: true, restart count 0 May 13 22:07:38.963: INFO: sample-webhook-deployment-78988fc6cd-n26tc started at 2022-05-13 22:07:25 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.963: INFO: Container sample-webhook ready: true, restart count 0 May 13 22:07:38.963: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr started at 2022-05-13
20:10:11 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.963: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:07:38.963: INFO: externalname-service-nl6sq started at 2022-05-13 22:05:23 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.963: INFO: Container externalname-service ready: true, restart count 0 May 13 22:07:38.963: INFO: kube-proxy-rs2zg started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.963: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:07:38.963: INFO: cmk-init-discover-node1-m2p59 started at 2022-05-13 20:12:33 +0000 UTC (0+3 container statuses recorded) May 13 22:07:38.963: INFO: Container discover ready: false, restart count 0 May 13 22:07:38.963: INFO: Container init ready: false, restart count 0 May 13 22:07:38.963: INFO: Container install ready: false, restart count 0 May 13 22:07:38.963: INFO: prometheus-k8s-0 started at 2022-05-13 20:14:32 +0000 UTC (0+4 container statuses recorded) May 13 22:07:38.963: INFO: Container config-reloader ready: true, restart count 0 May 13 22:07:38.963: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 22:07:38.963: INFO: Container grafana ready: true, restart count 0 May 13 22:07:38.963: INFO: Container prometheus ready: true, restart count 1 May 13 22:07:38.963: INFO: sample-webhook-deployment-78988fc6cd-w8xs7 started at 2022-05-13 22:07:22 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.964: INFO: Container sample-webhook ready: true, restart count 0 May 13 22:07:38.964: INFO: execpodcnzxk started at 2022-05-13 22:05:30 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.964: INFO: Container agnhost-container ready: true, restart count 0 May 13 22:07:38.964: INFO: nginx-proxy-node1 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.964: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:07:38.964: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded) May 13 22:07:38.964: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 22:07:39.158: INFO: Latency metrics for node node1 May 13 22:07:39.158: INFO: Logging node info for node node2 May 13 22:07:39.162: INFO: Node Info: &Node{ObjectMeta:{node2 461ea6c2-df11-4be4-802e-29bddc0f2535 47240 0 2022-05-13 19:59:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true 
feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 20:13:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:30 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:30 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 22:07:30 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 22:07:30 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b36a7c38429c4cc598bd0e6ca8278ad0,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:4fcc32fc-d037-4cf9-a62f-f372f6cc17cb,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 22:07:39.163: INFO: Logging kubelet events for node node2 May 13 22:07:39.165: INFO: Logging pods the kubelet thinks are on node node2 May 13 22:07:40.366: INFO: kube-flannel-lv9xf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded) May 13 22:07:40.366: INFO: Init container install-cni ready: true, restart count 2 May 13 22:07:40.366: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:07:40.366: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:07:40.366: INFO: collectd-9gqhr started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded) May 13 22:07:40.366: INFO: Container collectd ready: true, restart count 0 May 13 22:07:40.366: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:07:40.366: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:07:40.366: INFO: externalname-service-sjxpv started at 2022-05-13 22:05:24 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container
externalname-service ready: true, restart count 0 May 13 22:07:40.366: INFO: cmk-qhbd6 started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded) May 13 22:07:40.366: INFO: Container nodereport ready: true, restart count 0 May 13 22:07:40.366: INFO: Container reconcile ready: true, restart count 0 May 13 22:07:40.366: INFO: prometheus-operator-585ccfb458-vrwnp started at 2022-05-13 20:14:11 +0000 UTC (0+2 container statuses recorded) May 13 22:07:40.366: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:07:40.366: INFO: Container prometheus-operator ready: true, restart count 0 May 13 22:07:40.366: INFO: nginx-proxy-node2 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:07:40.366: INFO: kube-proxy-wkzbm started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:07:40.366: INFO: cmk-init-discover-node2-hm7r7 started at 2022-05-13 20:12:52 +0000 UTC (0+3 container statuses recorded) May 13 22:07:40.366: INFO: Container discover ready: false, restart count 0 May 13 22:07:40.366: INFO: Container init ready: false, restart count 0 May 13 22:07:40.366: INFO: Container install ready: false, restart count 0 May 13 22:07:40.366: INFO: test-webserver-8b5a9ac9-774e-4a34-a641-b2827a4a5abe started at 2022-05-13 22:04:50 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container test-webserver ready: true, restart count 0 May 13 22:07:40.366: INFO: node-exporter-n5snd started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded) May 13 22:07:40.366: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:07:40.366: INFO: Container node-exporter ready: true, restart count 0 May 13 22:07:40.366: INFO: fail-once-local-7z9dv started at 2022-05-13 22:07:31 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container c ready: false, restart count 1 May 13 22:07:40.366: INFO: fail-once-local-b62xn started at 2022-05-13 22:07:35 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container c ready: false, restart count 0 May 13 22:07:40.366: INFO: kube-multus-ds-amd64-l7nx2 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container kube-multus ready: true, restart count 1 May 13 22:07:40.366: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 started at 2022-05-13 20:17:23 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container tas-extender ready: true, restart count 0 May 13 22:07:40.366: INFO: liveness-5585cd15-90a0-48e9-86e8-87f63b350bcb started at 2022-05-13 22:03:54 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container agnhost-container ready: true, restart count 0 May 13 22:07:40.366: INFO: node-feature-discovery-worker-cxxqf started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:07:40.366: INFO: externalsvc-crmt8 started at 2022-05-13 22:07:14 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container externalsvc ready: false, restart count 0 May 13 22:07:40.366: INFO: fail-once-local-pgt2b started at 2022-05-13 22:07:31 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container c ready: false, restart count 1 May 
13 22:07:40.366: INFO: fail-once-local-pddsl started at 2022-05-13 22:07:37 +0000 UTC (0+1 container statuses recorded) May 13 22:07:40.366: INFO: Container c ready: false, restart count 0 May 13 22:07:40.914: INFO: Latency metrics for node node2 May 13 22:07:40.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8915" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [137.012 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:07:38.595: Unexpected error: <*errors.errorString | 0xc00580a430>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32440 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32440 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":31,"skipped":531,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:24.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 13 22:07:25.043: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:07:25.057: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:07:27.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076445, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076445, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076445, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076445, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is 
progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:07:29.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076445, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076445, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076445, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076445, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:07:32.076: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:42.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1740" for this suite. STEP: Destroying namespace "webhook-1740-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.368 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":35,"skipped":600,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:14.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8065 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8065 STEP: creating replication controller externalsvc in namespace services-8065 I0513 22:07:14.277955 31 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8065, replica count: 2 I0513 22:07:17.330129 31 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:07:20.331011 31 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 13 22:07:20.343: INFO: Creating new exec pod May 13 22:07:26.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8065 exec execpodlwnz7 -- /bin/sh -x -c nslookup clusterip-service.services-8065.svc.cluster.local' May 13 22:07:26.981: INFO: stderr: "+ nslookup clusterip-service.services-8065.svc.cluster.local\n" May 13 22:07:26.981: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-8065.svc.cluster.local\tcanonical name = externalsvc.services-8065.svc.cluster.local.\nName:\texternalsvc.services-8065.svc.cluster.local\nAddress: 10.233.30.206\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8065, will wait for the garbage collector to delete the pods May 13 22:07:27.040: INFO: Deleting ReplicationController externalsvc took: 4.542942ms May 13 22:07:27.141: INFO: Terminating ReplicationController externalsvc pods took: 
101.233826ms May 13 22:07:42.550: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:42.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8065" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:28.322 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":46,"skipped":771,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:22.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:07:22.745: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:07:24.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076442, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076442, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076442, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076442, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:07:26.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076442, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076442, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076442, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076442, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:07:29.765: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:42.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4069" for this suite. STEP: Destroying namespace "webhook-4069-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.473 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":17,"skipped":217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:42.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching May 13 22:07:42.991: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating May 13 22:07:43.005: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:43.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-5951" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":18,"skipped":247,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:40.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-052fd52a-6083-44f4-acf9-c23472b02a03 STEP: Creating a pod to test consume configMaps May 13 22:07:40.996: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ad97395-7bab-46dd-9dfa-b706ae6a81d7" in namespace "projected-4191" to be "Succeeded or Failed" May 13 22:07:40.998: INFO: Pod "pod-projected-configmaps-5ad97395-7bab-46dd-9dfa-b706ae6a81d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224149ms May 13 22:07:43.000: INFO: Pod "pod-projected-configmaps-5ad97395-7bab-46dd-9dfa-b706ae6a81d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.004711391s May 13 22:07:45.006: INFO: Pod "pod-projected-configmaps-5ad97395-7bab-46dd-9dfa-b706ae6a81d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010102395s STEP: Saw pod success May 13 22:07:45.006: INFO: Pod "pod-projected-configmaps-5ad97395-7bab-46dd-9dfa-b706ae6a81d7" satisfied condition "Succeeded or Failed" May 13 22:07:45.008: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-5ad97395-7bab-46dd-9dfa-b706ae6a81d7 container agnhost-container: STEP: delete the pod May 13 22:07:45.020: INFO: Waiting for pod pod-projected-configmaps-5ad97395-7bab-46dd-9dfa-b706ae6a81d7 to disappear May 13 22:07:45.022: INFO: Pod pod-projected-configmaps-5ad97395-7bab-46dd-9dfa-b706ae6a81d7 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:45.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4191" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:37.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:07:37.051: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:45.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2636" for this suite. 
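
The defaulting verified by this spec lives in the CRD's structural schema. A compact sketch using the apiextensions-apiserver v1 types; the field name "foo" and the default "bar" are placeholders, not the schema the test actually registers:

package crddefault

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// specSchema returns a schema in which .spec.foo defaults to "bar".
func specSchema() *apiextensionsv1.JSONSchemaProps {
	return &apiextensionsv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"spec": {
				Type: "object",
				Properties: map[string]apiextensionsv1.JSONSchemaProps{
					"foo": {
						Type:    "string",
						Default: &apiextensionsv1.JSON{Raw: []byte(`"bar"`)},
					},
				},
			},
		},
	}
}

The API server applies such defaults twice: when a create or update request omits the field, and when an object persisted without the field is read back from etcd: the "for requests and from storage" in the spec name.
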
• [SLOW TEST:8.139 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":22,"skipped":442,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:45.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events May 13 22:07:45.195: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:45.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7221" for this suite. 
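
The delete step in this Events API spec is a single DeleteCollection call scoped by a label selector rather than a loop of per-event deletes. A sketch against client-go v0.21, reusing the namespace and kubeconfig path from the log; the selector value is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// One request removes every event matching the selector; the spec
	// then lists again to confirm the set is empty.
	err = cs.EventsV1().Events("events-7221").DeleteCollection(
		context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "testevent-set=true"},
	)
	fmt.Println("DeleteCollection error:", err)
}
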
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":23,"skipped":443,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:31.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6673.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6673.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6673.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6673.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6673.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6673.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6673.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6673.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6673.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6673.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6673.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 215.60.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.60.215_udp@PTR;check="$$(dig +tcp +noall +answer +search 215.60.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.60.215_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6673.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6673.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6673.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6673.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6673.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6673.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6673.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6673.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6673.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6673.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6673.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 215.60.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.60.215_udp@PTR;check="$$(dig +tcp +noall +answer +search 215.60.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.60.215_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 13 22:07:35.835: INFO: Unable to read wheezy_udp@dns-test-service.dns-6673.svc.cluster.local from pod dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192: the server could not find the requested resource (get pods dns-test-260d529c-0d85-4378-ae47-ffa465dd5192) May 13 22:07:35.839: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6673.svc.cluster.local from pod dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192: the server could not find the requested resource (get pods dns-test-260d529c-0d85-4378-ae47-ffa465dd5192) May 13 22:07:35.842: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local from pod dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192: the server could not find the requested resource (get pods dns-test-260d529c-0d85-4378-ae47-ffa465dd5192) May 13 22:07:35.844: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local from pod dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192: the server could not find the requested resource (get pods dns-test-260d529c-0d85-4378-ae47-ffa465dd5192) May 13 22:07:35.870: INFO: Unable to read jessie_udp@dns-test-service.dns-6673.svc.cluster.local from pod dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192: the server could not find the requested resource (get pods dns-test-260d529c-0d85-4378-ae47-ffa465dd5192) May 13 22:07:35.872: INFO: Unable to read jessie_tcp@dns-test-service.dns-6673.svc.cluster.local from pod dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192: the server could not find the requested resource (get pods dns-test-260d529c-0d85-4378-ae47-ffa465dd5192) May 13 22:07:35.874: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local from pod dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192: the server could not find the requested resource (get pods dns-test-260d529c-0d85-4378-ae47-ffa465dd5192) May 13 22:07:35.877: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local from pod dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192: the server could not find the requested resource (get pods dns-test-260d529c-0d85-4378-ae47-ffa465dd5192) May 13 22:07:35.891: INFO: Lookups using dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192 failed for: [wheezy_udp@dns-test-service.dns-6673.svc.cluster.local wheezy_tcp@dns-test-service.dns-6673.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local jessie_udp@dns-test-service.dns-6673.svc.cluster.local jessie_tcp@dns-test-service.dns-6673.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6673.svc.cluster.local] May 13 22:07:40.896: INFO: Unable to read wheezy_udp@dns-test-service.dns-6673.svc.cluster.local from pod dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192: the server could not find the requested resource (get pods dns-test-260d529c-0d85-4378-ae47-ffa465dd5192) May 13 22:07:40.940: INFO: Lookups using dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192 failed for: [wheezy_udp@dns-test-service.dns-6673.svc.cluster.local] May 13 22:07:45.941: INFO: DNS probes using 
dns-6673/dns-test-260d529c-0d85-4378-ae47-ffa465dd5192 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:45.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6673" for this suite. • [SLOW TEST:14.207 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":17,"skipped":358,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:45.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:47.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-115" for this suite. 
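
The EndpointSlices the control plane creates for a selector-backed Service are linked to it by the kubernetes.io/service-name label, which is how a test (or any controller) can assert that they appear and disappear with the Service. A client-go sketch; the service name is a placeholder:

package main

import (
	"context"
	"fmt"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	slices, err := cs.DiscoveryV1().EndpointSlices("endpointslice-115").List(
		context.TODO(),
		metav1.ListOptions{LabelSelector: discoveryv1.LabelServiceName + "=example-svc"},
	)
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Println(s.Name, len(s.Endpoints), "endpoints")
	}
}
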
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":24,"skipped":514,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:47.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:47.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3275" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":25,"skipped":541,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:31.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:47.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5521" for this suite. 
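
The "restart count 1" entries for the fail-once-local pods earlier in the log are the point of this Job spec: with restartPolicy OnFailure the kubelet restarts the failed container in place, rather than the Job controller creating replacement pods, and the job still reaches its completion count. A sketch of a job shaped like that; image and command are placeholders, with an emptyDir (which survives container restarts) used to fail only on the first attempt:

package jobdemo

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func failOnceJob() *batchv1.Job {
	completions := int32(4)
	parallelism := int32(2)
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"},
		Spec: batchv1.JobSpec{
			Completions: &completions,
			Parallelism: &parallelism,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure => the kubelet restarts the container locally.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox:1.28",
						// First attempt leaves a marker and fails; the
						// restarted attempt sees the marker and succeeds.
						Command:      []string{"sh", "-c", "if [ -f /data/done ]; then exit 0; fi; touch /data/done; exit 1"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
				},
			},
		},
	}
}
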
• [SLOW TEST:16.034 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":30,"skipped":516,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:42.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes May 13 22:07:42.324: INFO: The status of Pod pod-update-10b6d4d1-8152-4ad7-9657-bb68e2f378d2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:44.327: INFO: The status of Pod pod-update-10b6d4d1-8152-4ad7-9657-bb68e2f378d2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:46.330: INFO: The status of Pod pod-update-10b6d4d1-8152-4ad7-9657-bb68e2f378d2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:48.329: INFO: The status of Pod pod-update-10b6d4d1-8152-4ad7-9657-bb68e2f378d2 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod May 13 22:07:48.843: INFO: Successfully updated pod "pod-update-10b6d4d1-8152-4ad7-9657-bb68e2f378d2" STEP: verifying the updated pod is in kubernetes May 13 22:07:48.848: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:48.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1068" for this suite. 
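
"updating the pod" in this spec is the usual read-modify-write against the API server; wrapping it in RetryOnConflict re-fetches the object when a stale resourceVersion causes a conflict. A client-go sketch using the pod and namespace names from the log above; the label being set is illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods := cs.CoreV1().Pods("pods-1068")
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := pods.Get(context.TODO(), "pod-update-10b6d4d1-8152-4ad7-9657-bb68e2f378d2", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // the e2e test flips a label much like this
		_, err = pods.Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}
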
• [SLOW TEST:6.567 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":638,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:43.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 13 22:07:43.069: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-9081 612b5c5b-544c-447c-a295-ebc7d70cc876 47645 0 2022-05-13 22:07:43 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-05-13 22:07:43 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-822r7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Na
me:kube-api-access-822r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 13 22:07:43.073: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:45.076: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:47.077: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:49.079: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 13 22:07:49.079: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9081 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:07:49.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... May 13 22:07:49.175: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9081 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:07:49.175: INFO: >>> kubeConfig: /root/.kube/config May 13 22:07:49.281: INFO: Deleting pod test-dns-nameservers... 
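
Stripped of the full object dump above, the part of the pod spec this test exercises is just dnsPolicy: None plus a dnsConfig: None tells the kubelet to ignore the cluster DNS entirely and build the container's resolv.conf from dnsConfig alone. A minimal sketch with the same nameserver, search domain, image, and args visible in the dump:

package dnsdemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func customDNSPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers"},
		Spec: corev1.PodSpec{
			// Build resolv.conf only from DNSConfig below.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"pause"},
			}},
		},
	}
}
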
[AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:49.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9081" for this suite. • [SLOW TEST:6.262 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:42.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:07:42.622: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 13 22:07:47.626: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 13 22:07:49.632: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 13 22:07:49.646: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5964 a9e5436b-9c72-431e-9175-566fa0b5bf1a 47963 1 2022-05-13 22:07:49 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2022-05-13 22:07:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006b49e08 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 13 22:07:49.649: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-5964 390041a7-615a-4ce2-96fa-29421665a89c 47965 1 2022-05-13 22:07:49 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment a9e5436b-9c72-431e-9175-566fa0b5bf1a 0xc006b7c447 0xc006b7c448}] [] [{kube-controller-manager Update apps/v1 2022-05-13 22:07:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9e5436b-9c72-431e-9175-566fa0b5bf1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006b7c4d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 22:07:49.650: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 13 22:07:49.650: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5964 
5b352217-6b01-418d-a146-73f4a39c1c04 47964 1 2022-05-13 22:07:42 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment a9e5436b-9c72-431e-9175-566fa0b5bf1a 0xc006b7c2b7 0xc006b7c2b8}] [] [{e2e.test Update apps/v1 2022-05-13 22:07:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-13 22:07:49 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"a9e5436b-9c72-431e-9175-566fa0b5bf1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006b7c398 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 13 22:07:49.653: INFO: Pod "test-cleanup-controller-52mcj" is available: &Pod{ObjectMeta:{test-cleanup-controller-52mcj test-cleanup-controller- deployment-5964 b09c06eb-9fb5-493c-ae85-547b607e39f3 47933 0 2022-05-13 22:07:42 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.150" ], "mac": "26:0c:e9:e8:c8:ad", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.150" ], "mac": "26:0c:e9:e8:c8:ad", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-controller 5b352217-6b01-418d-a146-73f4a39c1c04 0xc006b7ca07 0xc006b7ca08}] [] [{kube-controller-manager Update v1 2022-05-13 22:07:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b352217-6b01-418d-a146-73f4a39c1c04\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:07:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:07:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.150\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dhlnl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhlnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:fa
lse,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:07:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:07:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:07:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.150,StartTime:2022-05-13 22:07:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:07:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://a7edaae4eafad5f1c49d931dffc68decbf98fa55a9d990f10fc61ca2c31dd794,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:49.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5964" for this suite. 
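[Editor's note] The Deployment dump above is the crux of this test: the spec carries RevisionHistoryLimit:*0, so once the adopted test-cleanup-controller ReplicaSet is scaled down, the deployment controller deletes it instead of retaining it for rollback. A minimal client-go sketch of an equivalent Deployment follows; the names, labels, and image come from the dump, while the package and function names are ours and the clientset wiring is assumed to exist.

package sketches

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createCleanupDeployment mirrors the spec logged above: with
// RevisionHistoryLimit set to 0, old ReplicaSets are deleted as soon
// as they are fully scaled down, which is what the test asserts.
func createCleanupDeployment(ctx context.Context, c kubernetes.Interface, ns string) error {
	one, zero := int32(1), int32(0)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &one,
			RevisionHistoryLimit: &zero, // RevisionHistoryLimit:*0 in the dump
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "cleanup-pod"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"name": "cleanup-pod"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
					}},
				},
			},
		},
	}
	_, err := c.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{})
	return err
}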
• [SLOW TEST:7.067 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":47,"skipped":787,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":543,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:45.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 13 22:07:45.066: INFO: Waiting up to 5m0s for pod "downward-api-3e1fc074-8ebe-4a4d-983a-7f9374da7016" in namespace "downward-api-6878" to be "Succeeded or Failed" May 13 22:07:45.068: INFO: Pod "downward-api-3e1fc074-8ebe-4a4d-983a-7f9374da7016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041543ms May 13 22:07:47.071: INFO: Pod "downward-api-3e1fc074-8ebe-4a4d-983a-7f9374da7016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004550563s May 13 22:07:49.075: INFO: Pod "downward-api-3e1fc074-8ebe-4a4d-983a-7f9374da7016": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009326969s May 13 22:07:51.079: INFO: Pod "downward-api-3e1fc074-8ebe-4a4d-983a-7f9374da7016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013046124s STEP: Saw pod success May 13 22:07:51.079: INFO: Pod "downward-api-3e1fc074-8ebe-4a4d-983a-7f9374da7016" satisfied condition "Succeeded or Failed" May 13 22:07:51.084: INFO: Trying to get logs from node node2 pod downward-api-3e1fc074-8ebe-4a4d-983a-7f9374da7016 container dapi-container: STEP: delete the pod May 13 22:07:51.096: INFO: Waiting for pod downward-api-3e1fc074-8ebe-4a4d-983a-7f9374da7016 to disappear May 13 22:07:51.098: INFO: Pod downward-api-3e1fc074-8ebe-4a4d-983a-7f9374da7016 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:51.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6878" for this suite. 
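[Editor's note] The Downward API test above wires the container's own limits.cpu/memory and requests.cpu/memory into environment variables via resourceFieldRef. A sketch of such a pod spec follows; only the container name dapi-container comes from the log, and the image, command, env-var names, and resource quantities are assumptions.

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIEnvPod exposes the container's limits and requests as
// env vars; the kubelet resolves each resourceFieldRef at start-up.
func downwardAPIEnvPod(name string) *corev1.Pod {
	resourceEnv := func(envName, res string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: envName,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: "dapi-container",
					Resource:      res,
				},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // assumed image
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("1250m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					resourceEnv("CPU_LIMIT", "limits.cpu"),
					resourceEnv("MEMORY_LIMIT", "limits.memory"),
					resourceEnv("CPU_REQUEST", "requests.cpu"),
					resourceEnv("MEMORY_REQUEST", "requests.memory"),
				},
			}},
		},
	}
}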
• [SLOW TEST:6.075 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":543,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} S ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:47.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:07:47.608: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2d02cccf-dcef-44b7-8812-ebc4391fcf41", Controller:(*bool)(0xc003d1086a), BlockOwnerDeletion:(*bool)(0xc003d1086b)}} May 13 22:07:47.612: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"30f3596d-1bcc-4559-b645-03a6fb17fe7f", Controller:(*bool)(0xc004db291a), BlockOwnerDeletion:(*bool)(0xc004db291b)}} May 13 22:07:47.616: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c28f2f4a-b282-43f0-b19b-e6ff919893fc", Controller:(*bool)(0xc004fd3482), BlockOwnerDeletion:(*bool)(0xc004fd3483)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:52.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7004" for this suite. 
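[Editor's note] The OwnerReferences lines in the garbage-collector test above show the deliberate cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2; the test then verifies that deletion is not blocked by the circle. A sketch of attaching one such owner reference (the function name is ours; clientset wiring assumed):

package sketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setOwner points child's ownerReference at owner; applying this as
// pod1->pod3, pod2->pod1, pod3->pod2 produces the dependency circle
// the test expects the garbage collector to handle.
func setOwner(ctx context.Context, c kubernetes.Interface, ns string, child, owner *corev1.Pod) (*corev1.Pod, error) {
	truth := true
	child.OwnerReferences = []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &truth,
		BlockOwnerDeletion: &truth,
	}}
	return c.CoreV1().Pods(ns).Update(ctx, child, metav1.UpdateOptions{})
}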
• [SLOW TEST:5.077 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":26,"skipped":574,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:48.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:07:48.940: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d09f397-170e-43f2-b349-ae25a6cefb39" in namespace "downward-api-3412" to be "Succeeded or Failed" May 13 22:07:48.945: INFO: Pod "downwardapi-volume-3d09f397-170e-43f2-b349-ae25a6cefb39": Phase="Pending", Reason="", readiness=false. Elapsed: 5.347721ms May 13 22:07:50.949: INFO: Pod "downwardapi-volume-3d09f397-170e-43f2-b349-ae25a6cefb39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008631298s May 13 22:07:52.952: INFO: Pod "downwardapi-volume-3d09f397-170e-43f2-b349-ae25a6cefb39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011915558s May 13 22:07:54.955: INFO: Pod "downwardapi-volume-3d09f397-170e-43f2-b349-ae25a6cefb39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015235621s STEP: Saw pod success May 13 22:07:54.955: INFO: Pod "downwardapi-volume-3d09f397-170e-43f2-b349-ae25a6cefb39" satisfied condition "Succeeded or Failed" May 13 22:07:54.957: INFO: Trying to get logs from node node1 pod downwardapi-volume-3d09f397-170e-43f2-b349-ae25a6cefb39 container client-container: STEP: delete the pod May 13 22:07:54.981: INFO: Waiting for pod downwardapi-volume-3d09f397-170e-43f2-b349-ae25a6cefb39 to disappear May 13 22:07:54.983: INFO: Pod downwardapi-volume-3d09f397-170e-43f2-b349-ae25a6cefb39 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:54.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3412" for this suite. 
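[Editor's note] The DefaultMode test above mounts a downward API volume and asserts that the projected file inherits the volume-wide defaultMode. A sketch of such a volume; the restrictive mode 0400, the volume name, and the file path are assumptions:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
)

// downwardAPIVolume returns a volume whose files take DefaultMode
// (here 0400) unless an item sets its own Mode; the test reads the
// mounted file's permissions to confirm this.
func downwardAPIVolume() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				DefaultMode: &mode,
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "podname",
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.name",
					},
				}},
			},
		},
	}
}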
• [SLOW TEST:6.087 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":665,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:52.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:07:52.754: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8777d1dc-dafa-4591-974b-2108cf4e02ba" in namespace "downward-api-5973" to be "Succeeded or Failed" May 13 22:07:52.756: INFO: Pod "downwardapi-volume-8777d1dc-dafa-4591-974b-2108cf4e02ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325218ms May 13 22:07:54.761: INFO: Pod "downwardapi-volume-8777d1dc-dafa-4591-974b-2108cf4e02ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007239437s May 13 22:07:56.767: INFO: Pod "downwardapi-volume-8777d1dc-dafa-4591-974b-2108cf4e02ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012696005s STEP: Saw pod success May 13 22:07:56.767: INFO: Pod "downwardapi-volume-8777d1dc-dafa-4591-974b-2108cf4e02ba" satisfied condition "Succeeded or Failed" May 13 22:07:56.769: INFO: Trying to get logs from node node2 pod downwardapi-volume-8777d1dc-dafa-4591-974b-2108cf4e02ba container client-container: STEP: delete the pod May 13 22:07:56.826: INFO: Waiting for pod downwardapi-volume-8777d1dc-dafa-4591-974b-2108cf4e02ba to disappear May 13 22:07:56.828: INFO: Pod downwardapi-volume-8777d1dc-dafa-4591-974b-2108cf4e02ba no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:56.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5973" for this suite. 
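[Editor's note] The cpu-request variant above projects the container's requests.cpu into a file rather than an env var, using resourceFieldRef on a downward API volume item. A sketch follows; the container name client-container appears in the log, while the volume name and file path are assumptions:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
)

// cpuRequestVolume projects the named container's requests.cpu into
// the file "cpu_request"; the kubelet writes the resolved value when
// the volume is mounted (a Divisor may be set to change the unit).
func cpuRequestVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.cpu",
					},
				}},
			},
		},
	}
}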
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":616,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:03:54.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-5585cd15-90a0-48e9-86e8-87f63b350bcb in namespace container-probe-3146 May 13 22:03:58.730: INFO: Started pod liveness-5585cd15-90a0-48e9-86e8-87f63b350bcb in namespace container-probe-3146 STEP: checking the pod's current state and verifying that restartCount is present May 13 22:03:58.733: INFO: Initial restart count of pod liveness-5585cd15-90a0-48e9-86e8-87f63b350bcb is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:07:59.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3146" for this suite. • [SLOW TEST:244.553 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:49.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:00.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-424" for this suite. • [SLOW TEST:11.066 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:51.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:07:51.132: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 13 22:07:51.137: INFO: Pod name sample-pod: Found 0 pods out of 1 May 13 22:07:56.142: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 13 22:07:56.142: INFO: Creating deployment "test-rolling-update-deployment" May 13 22:07:56.146: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 13 22:07:56.151: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 13 22:07:58.160: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 13 22:07:58.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076476, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076476, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076476, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076476, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:08:00.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076476, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076476, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076476, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076476, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:08:02.167: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 13 22:08:02.175: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4798 4eaf3976-f10f-4e51-86a4-772e051a7b6e 48433 1 2022-05-13 22:07:56 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-05-13 22:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-13 22:08:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false 
false}] [] Always 0xc003fa3b48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-05-13 22:07:56 +0000 UTC,LastTransitionTime:2022-05-13 22:07:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2022-05-13 22:08:00 +0000 UTC,LastTransitionTime:2022-05-13 22:07:56 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 13 22:08:02.178: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-4798 fb7791b9-f30c-40fe-807f-5451dcf089a5 48422 1 2022-05-13 22:07:56 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 4eaf3976-f10f-4e51-86a4-772e051a7b6e 0xc005d31967 0xc005d31968}] [] [{kube-controller-manager Update apps/v1 2022-05-13 22:08:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4eaf3976-f10f-4e51-86a4-772e051a7b6e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005d319f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 13 22:08:02.178: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 13 22:08:02.178: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4798 0d9a758a-4709-4272-9a1d-ded3786fc94b 48432 2 2022-05-13 22:07:51 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 4eaf3976-f10f-4e51-86a4-772e051a7b6e 0xc005d31857 0xc005d31858}] [] [{e2e.test Update apps/v1 2022-05-13 22:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-13 22:08:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4eaf3976-f10f-4e51-86a4-772e051a7b6e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005d318f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 22:08:02.182: INFO: Pod "test-rolling-update-deployment-585b757574-xzww6" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-xzww6 test-rolling-update-deployment-585b757574- deployment-4798 aca6ace2-e5cc-41ce-9b4a-cb8ffd62a0aa 48421 0 
2022-05-13 22:07:56 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.156" ], "mac": "16:f1:fd:09:44:ae", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.156" ], "mac": "16:f1:fd:09:44:ae", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 fb7791b9-f30c-40fe-807f-5451dcf089a5 0xc00437348f 0xc0043734a0}] [] [{kube-controller-manager Update v1 2022-05-13 22:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb7791b9-f30c-40fe-807f-5451dcf089a5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-13 22:07:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-13 22:08:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.156\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-749pg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-749pg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:07:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:08:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:08:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.156,StartTime:2022-05-13 22:07:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-13 22:07:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://2505b6de3bd13cb5cf95f13a6bd827ba103e35e51d287e12ea498e3889bf6297,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.156,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:02.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4798" for this suite. • [SLOW TEST:11.080 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":34,"skipped":544,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:55.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 13 22:07:55.631: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 13 22:07:57.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076475, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076475, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076475, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076475, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:07:59.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076475, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076475, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076475, 
loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076475, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 13 22:08:02.652: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:02.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-768" for this suite. STEP: Destroying namespace "webhook-768-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.676 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":38,"skipped":674,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:59.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:07:59.324: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to 
kubernetes May 13 22:07:59.345: INFO: The status of Pod pod-logs-websocket-b7a19223-29d6-4895-8cc6-77155f98f508 is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:01.348: INFO: The status of Pod pod-logs-websocket-b7a19223-29d6-4895-8cc6-77155f98f508 is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:03.351: INFO: The status of Pod pod-logs-websocket-b7a19223-29d6-4895-8cc6-77155f98f508 is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:05.349: INFO: The status of Pod pod-logs-websocket-b7a19223-29d6-4895-8cc6-77155f98f508 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:05.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5336" for this suite. • [SLOW TEST:6.082 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":192,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:37.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:05.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9899" for this suite. • [SLOW TEST:28.078 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":-1,"completed":15,"skipped":291,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":48,"skipped":860,"failed":0} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:00.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:08:00.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e75f2a8b-d79e-466c-b12f-9f51da9d8a92" in namespace "projected-1111" to be "Succeeded or Failed" May 13 22:08:00.917: INFO: Pod "downwardapi-volume-e75f2a8b-d79e-466c-b12f-9f51da9d8a92": Phase="Pending", Reason="", readiness=false. Elapsed: 5.17821ms May 13 22:08:02.920: INFO: Pod "downwardapi-volume-e75f2a8b-d79e-466c-b12f-9f51da9d8a92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007945287s May 13 22:08:04.923: INFO: Pod "downwardapi-volume-e75f2a8b-d79e-466c-b12f-9f51da9d8a92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010757706s May 13 22:08:06.927: INFO: Pod "downwardapi-volume-e75f2a8b-d79e-466c-b12f-9f51da9d8a92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014970237s STEP: Saw pod success May 13 22:08:06.927: INFO: Pod "downwardapi-volume-e75f2a8b-d79e-466c-b12f-9f51da9d8a92" satisfied condition "Succeeded or Failed" May 13 22:08:06.930: INFO: Trying to get logs from node node2 pod downwardapi-volume-e75f2a8b-d79e-466c-b12f-9f51da9d8a92 container client-container: STEP: delete the pod May 13 22:08:06.942: INFO: Waiting for pod downwardapi-volume-e75f2a8b-d79e-466c-b12f-9f51da9d8a92 to disappear May 13 22:08:06.944: INFO: Pod downwardapi-volume-e75f2a8b-d79e-466c-b12f-9f51da9d8a92 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:06.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1111" for this suite. 
• [SLOW TEST:6.074 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":860,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:02.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:08:02.237: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:07.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1413" for this suite. • [SLOW TEST:5.566 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":35,"skipped":553,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:07.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching May 13 22:08:08.202: INFO: starting watch STEP: patching STEP: updating 
May 13 22:08:08.209: INFO: waiting for watch events with expected annotations May 13 22:08:08.209: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:08.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-3282" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":36,"skipped":567,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:02.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-a939f67f-fbf6-4941-9ed2-efa3c7390f74 STEP: Creating a pod to test consume secrets May 13 22:08:02.805: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c6f5d096-ee84-43d9-8373-ceeefb07003e" in namespace "projected-7938" to be "Succeeded or Failed" May 13 22:08:02.808: INFO: Pod "pod-projected-secrets-c6f5d096-ee84-43d9-8373-ceeefb07003e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270532ms May 13 22:08:04.811: INFO: Pod "pod-projected-secrets-c6f5d096-ee84-43d9-8373-ceeefb07003e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005328652s May 13 22:08:06.815: INFO: Pod "pod-projected-secrets-c6f5d096-ee84-43d9-8373-ceeefb07003e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009479487s May 13 22:08:08.819: INFO: Pod "pod-projected-secrets-c6f5d096-ee84-43d9-8373-ceeefb07003e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013591616s STEP: Saw pod success May 13 22:08:08.819: INFO: Pod "pod-projected-secrets-c6f5d096-ee84-43d9-8373-ceeefb07003e" satisfied condition "Succeeded or Failed" May 13 22:08:08.821: INFO: Trying to get logs from node node2 pod pod-projected-secrets-c6f5d096-ee84-43d9-8373-ceeefb07003e container projected-secret-volume-test: STEP: delete the pod May 13 22:08:08.834: INFO: Waiting for pod pod-projected-secrets-c6f5d096-ee84-43d9-8373-ceeefb07003e to disappear May 13 22:08:08.836: INFO: Pod pod-projected-secrets-c6f5d096-ee84-43d9-8373-ceeefb07003e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:08.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7938" for this suite. 
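------------------------------
The Certificates API test above walks the full CSR lifecycle, including the separate /approval and /status subresources, which is why the log shows distinct patch/update steps for each. A minimal CertificateSigningRequest of the shape it exercises might look like the following sketch (the name, signerName, and request payload are illustrative placeholders, not values from this run):

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-csr                   # illustrative name
spec:
  signerName: example.com/e2e-signer  # illustrative signer
  request: <base64-encoded PKCS#10 CSR PEM>   # placeholder, must be real base64 data
  usages:
  - digital signature
  - key encipherment
  - client auth

Approval then goes through the subresource, e.g. via kubectl certificate approve example-csr, rather than a plain update of the object.
------------------------------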
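------------------------------
The projected-secret test above mounts a secret through a projected volume whose items remap a key to a custom file path. A minimal sketch of that shape, with illustrative names, keys, and container args in place of the suite's randomized ones:

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map     # illustrative
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["mounttest", "--file_content=/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1     # the key is surfaced at this relative path
------------------------------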
• [SLOW TEST:6.079 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":713,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:56.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 13 22:08:05.385: INFO: Successfully updated pod "adopt-release-fdck6" STEP: Checking that the Job readopts the Pod May 13 22:08:05.385: INFO: Waiting up to 15m0s for pod "adopt-release-fdck6" in namespace "job-8929" to be "adopted" May 13 22:08:05.387: INFO: Pod "adopt-release-fdck6": Phase="Running", Reason="", readiness=true. Elapsed: 2.078315ms May 13 22:08:07.395: INFO: Pod "adopt-release-fdck6": Phase="Running", Reason="", readiness=true. Elapsed: 2.009278987s May 13 22:08:07.395: INFO: Pod "adopt-release-fdck6" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 13 22:08:07.903: INFO: Successfully updated pod "adopt-release-fdck6" STEP: Checking that the Job releases the Pod May 13 22:08:07.903: INFO: Waiting up to 15m0s for pod "adopt-release-fdck6" in namespace "job-8929" to be "released" May 13 22:08:07.906: INFO: Pod "adopt-release-fdck6": Phase="Running", Reason="", readiness=true. Elapsed: 2.282009ms May 13 22:08:09.909: INFO: Pod "adopt-release-fdck6": Phase="Running", Reason="", readiness=true. Elapsed: 2.005372754s May 13 22:08:09.909: INFO: Pod "adopt-release-fdck6" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:09.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8929" for this suite. 
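------------------------------
Adoption and release in the Job test above are driven by the controller's label selector together with pod ownerReferences: deleting a pod's controller ownerReference orphans it, the Job controller re-adopts it while its labels still match, and stripping those labels makes the Job release it. A rough manifest for a job of that shape (the manualSelector setting, label key, image, and command are assumptions for illustration):

apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release
spec:
  parallelism: 2
  manualSelector: true          # assumed here so that label edits directly affect ownership
  selector:
    matchLabels:
      job: adopt-release
  template:
    metadata:
      labels:
        job: adopt-release      # removing this label triggers release by the Job
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # illustrative image
        command: ["sleep", "1000000"]
------------------------------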
• [SLOW TEST:13.069 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":28,"skipped":620,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:10.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:10.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7406" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":29,"skipped":698,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:05.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 13 22:08:05.433: INFO: Pod name pod-release: Found 0 pods out of 1 May 13 22:08:10.436: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:11.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8286" for this suite. 
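------------------------------
The ReplicationController test above exercises the same mechanism in reverse: once a pod's labels stop matching the RC's selector, the controller releases it and creates a replacement to restore the replica count. A minimal RC of the shape under test (the image is an assumption; the pod-release name matches the prefix seen in the log):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release       # patching this label off a pod releases it from the RC
    spec:
      containers:
      - name: pod-release
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # illustrative image
------------------------------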
• [SLOW TEST:6.058 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":14,"skipped":199,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:08.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:08:08.949: INFO: The status of Pod busybox-scheduling-5640c4da-caa7-4367-b146-6ff3577110f2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:10.952: INFO: The status of Pod busybox-scheduling-5640c4da-caa7-4367-b146-6ff3577110f2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:12.953: INFO: The status of Pod busybox-scheduling-5640c4da-caa7-4367-b146-6ff3577110f2 is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:14.952: INFO: The status of Pod busybox-scheduling-5640c4da-caa7-4367-b146-6ff3577110f2 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:14.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3711" for this suite. 
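------------------------------
The kubelet test above runs a one-shot busybox command and asserts that its stdout appears in the container log. A minimal equivalent (image tag and echoed message are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling      # the suite appends a random UID to this name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29         # illustrative tag
    command: ["/bin/sh", "-c", "echo running in the pod"]
# kubectl logs busybox-scheduling would then return: running in the pod
------------------------------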
• [SLOW TEST:6.056 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":754,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:05.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:08:05.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4611 create -f -' May 13 22:08:06.032: INFO: stderr: "" May 13 22:08:06.032: INFO: stdout: "replicationcontroller/agnhost-primary created\n" May 13 22:08:06.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4611 create -f -' May 13 22:08:06.376: INFO: stderr: "" May 13 22:08:06.376: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
May 13 22:08:07.380: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:07.380: INFO: Found 0 / 1 May 13 22:08:08.378: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:08.378: INFO: Found 0 / 1 May 13 22:08:09.380: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:09.380: INFO: Found 0 / 1 May 13 22:08:10.379: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:10.379: INFO: Found 0 / 1 May 13 22:08:11.381: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:11.381: INFO: Found 0 / 1 May 13 22:08:12.381: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:12.381: INFO: Found 0 / 1 May 13 22:08:13.380: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:13.380: INFO: Found 0 / 1 May 13 22:08:14.379: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:14.379: INFO: Found 0 / 1 May 13 22:08:15.379: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:15.379: INFO: Found 0 / 1 May 13 22:08:16.379: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:16.379: INFO: Found 0 / 1 May 13 22:08:17.380: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:17.380: INFO: Found 0 / 1 May 13 22:08:18.380: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:18.380: INFO: Found 0 / 1 May 13 22:08:19.380: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:19.380: INFO: Found 1 / 1 May 13 22:08:19.380: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 13 22:08:19.382: INFO: Selector matched 1 pods for map[app:agnhost] May 13 22:08:19.382: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 13 22:08:19.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4611 describe pod agnhost-primary-8f5vh' May 13 22:08:19.585: INFO: stderr: "" May 13 22:08:19.586: INFO: stdout: "Name: agnhost-primary-8f5vh\nNamespace: kubectl-4611\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Fri, 13 May 2022 22:08:06 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.163\"\n ],\n \"mac\": \"0a:c3:77:9c:1f:30\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.163\"\n ],\n \"mac\": \"0a:c3:77:9c:1f:30\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.4.163\nIPs:\n IP: 10.244.4.163\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://9c919546c90eb8eec5757350a20f2b01917c8e18edb92c4cc146662b7a4f65e8\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 13 May 2022 22:08:11 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7jgxg (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-7jgxg:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: 
true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 13s default-scheduler Successfully assigned kubectl-4611/agnhost-primary-8f5vh to node2\n Normal Pulling 9s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 9s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 279.807644ms\n Normal Created 9s kubelet Created container agnhost-primary\n Normal Started 8s kubelet Started container agnhost-primary\n" May 13 22:08:19.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4611 describe rc agnhost-primary' May 13 22:08:19.796: INFO: stderr: "" May 13 22:08:19.796: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4611\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replication-controller Created pod: agnhost-primary-8f5vh\n" May 13 22:08:19.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4611 describe service agnhost-primary' May 13 22:08:19.956: INFO: stderr: "" May 13 22:08:19.956: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4611\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.61.65\nIPs: 10.233.61.65\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.4.163:6379\nSession Affinity: None\nEvents: \n" May 13 22:08:19.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4611 describe node master1' May 13 22:08:20.163: INFO: stderr: "" May 13 22:08:20.163: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n nfd.node.kubernetes.io/master.version: v0.8.2\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 13 May 2022 19:57:36 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Fri, 13 May 2022 22:08:13 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 13 May 2022 20:03:19 +0000 Fri, 13 May 2022 20:03:19 +0000 FlannelIsUp Flannel is running on this node\n 
MemoryPressure False Fri, 13 May 2022 22:08:14 +0000 Fri, 13 May 2022 19:57:33 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 13 May 2022 22:08:14 +0000 Fri, 13 May 2022 19:57:33 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 13 May 2022 22:08:14 +0000 Fri, 13 May 2022 19:57:33 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 13 May 2022 22:08:14 +0000 Fri, 13 May 2022 20:03:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 440625980Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518304Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 406080902496\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629472Ki\n pods: 110\nSystem Info:\n Machine ID: 5bc4f1fb629f4c3bb455995355cca59c\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: 196d75bb-273f-44bf-9b96-1cfef0d34445\n Kernel Version: 3.10.0-1160.62.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.16\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-gqdgz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 123m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 120m\n kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 129m\n kube-system kube-flannel-jw4mp 150m (0%) 300m (0%) 64M (0%) 500M (0%) 127m\n kube-system kube-multus-ds-amd64-ts4fz 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 127m\n kube-system kube-proxy-6q994 0 (0%) 0 (0%) 0 (0%) 0 (0%) 128m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 110m\n kube-system node-feature-discovery-controller-cff799f9f-k2qmv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 119m\n monitoring node-exporter-2jxfg 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 114m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 912m (1%) 670m (0%)\n memory 368087040 (0%) 825058560 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 13 22:08:20.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4611 describe namespace kubectl-4611' May 13 22:08:20.335: INFO: stderr: "" May 13 22:08:20.335: INFO: stdout: "Name: kubectl-4611\nLabels: e2e-framework=kubectl\n e2e-run=bdd62b91-514c-484c-a696-43e4b6604364\n kubernetes.io/metadata.name=kubectl-4611\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:20.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4611" for this suite. 
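------------------------------
The rc and service that kubectl describe reports on above were piped to kubectl create -f - from stdin, so their manifests never appear in the log. Reconstructed from the describe output (the named container port is inferred from the service's TargetPort field), they would look roughly like:

apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
spec:
  replicas: 1
  selector:
    app: agnhost
    role: primary
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
    spec:
      containers:
      - name: agnhost-primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        ports:
        - name: agnhost-server   # inferred from the service's named targetPort
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
spec:
  type: ClusterIP
  selector:
    app: agnhost
    role: primary
  ports:
  - port: 6379
    targetPort: agnhost-server
------------------------------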
• [SLOW TEST:14.731 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":16,"skipped":318,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:20.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:20.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6771" for this suite. 
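------------------------------
The Events API test above runs against events.k8s.io/v1, which, unlike the core v1 Event, requires a microsecond-precision eventTime and supports field selection on reportingController, as the listing steps show. An event of that shape, with illustrative names and timestamp:

apiVersion: events.k8s.io/v1
kind: Event
metadata:
  name: test-event              # illustrative
  namespace: default
eventTime: "2022-05-13T22:08:20.000000Z"           # MicroTime, required by this API group
reportingController: example.com/test-controller   # illustrative; filterable via field selector
reportingInstance: test-instance
action: Testing
reason: Testing
type: Normal
note: a note for the test event
regarding:
  kind: Pod
  namespace: default
  name: some-pod                # illustrative object reference
------------------------------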
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":17,"skipped":326,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:10.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics May 13 22:08:20.207: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 13 22:08:20.565: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: May 13 22:08:20.565: INFO: Deleting pod "simpletest-rc-to-be-deleted-cmzqq" in namespace "gc-7489" May 13 22:08:20.571: INFO: Deleting pod "simpletest-rc-to-be-deleted-dq6z2" in namespace "gc-7489" May 13 22:08:20.576: INFO: Deleting pod "simpletest-rc-to-be-deleted-g58zq" in namespace "gc-7489" May 13 22:08:20.582: INFO: Deleting pod "simpletest-rc-to-be-deleted-gbdjb" in namespace "gc-7489" May 13 22:08:20.587: INFO: Deleting pod "simpletest-rc-to-be-deleted-ghn6p" in namespace "gc-7489" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:20.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7489" for this suite. 
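------------------------------
The garbage-collector test above hangs half of one RC's pods off a second, surviving owner; a dependent that retains a valid ownerReference is not collected when its other owner is deleted in the foreground. A sketch of such a dependent's metadata (the pod suffix and UIDs are placeholders; real values come from the API server):

apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-to-be-deleted-xxxxx   # placeholder pod name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 11111111-1111-1111-1111-111111111111   # placeholder
    controller: true
    blockOwnerDeletion: true   # holds up foreground deletion of this owner
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 22222222-2222-2222-2222-222222222222   # placeholder
spec:
  containers:
  - name: nginx
    image: k8s.gcr.io/e2e-test-images/nginx:1.14-1   # illustrative image
------------------------------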
• [SLOW TEST:10.482 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":30,"skipped":707,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:06.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:08:06.999: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 13 22:08:15.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1434 --namespace=crd-publish-openapi-1434 create -f -' May 13 22:08:16.190: INFO: stderr: "" May 13 22:08:16.190: INFO: stdout: "e2e-test-crd-publish-openapi-6889-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 13 22:08:16.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1434 --namespace=crd-publish-openapi-1434 delete e2e-test-crd-publish-openapi-6889-crds test-cr' May 13 22:08:16.373: INFO: stderr: "" May 13 22:08:16.373: INFO: stdout: "e2e-test-crd-publish-openapi-6889-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 13 22:08:16.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1434 --namespace=crd-publish-openapi-1434 apply -f -' May 13 22:08:16.734: INFO: stderr: "" May 13 22:08:16.734: INFO: stdout: "e2e-test-crd-publish-openapi-6889-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 13 22:08:16.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1434 --namespace=crd-publish-openapi-1434 delete e2e-test-crd-publish-openapi-6889-crds test-cr' May 13 22:08:16.884: INFO: stderr: "" May 13 22:08:16.885: INFO: stdout: "e2e-test-crd-publish-openapi-6889-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 13 22:08:16.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1434 explain e2e-test-crd-publish-openapi-6889-crds' May 13 22:08:17.249: INFO: stderr: "" May 13 22:08:17.249: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6889-crd\nVERSION: 
crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:20.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1434" for this suite. • [SLOW TEST:13.916 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":50,"skipped":871,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:08.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-a0f9f1e3-47a4-46e4-bb6e-fe5a9881c31d STEP: Creating a pod to test consume secrets May 13 22:08:08.354: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97" in namespace "projected-7199" to be "Succeeded or Failed" May 13 22:08:08.356: INFO: Pod "pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.548226ms May 13 22:08:10.360: INFO: Pod "pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006160852s May 13 22:08:12.363: INFO: Pod "pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009285337s May 13 22:08:14.367: INFO: Pod "pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013285386s May 13 22:08:16.372: INFO: Pod "pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017949193s May 13 22:08:18.375: INFO: Pod "pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020773244s May 13 22:08:20.379: INFO: Pod "pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02496055s May 13 22:08:22.385: INFO: Pod "pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97": Phase="Pending", Reason="", readiness=false. Elapsed: 14.030715037s May 13 22:08:24.388: INFO: Pod "pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.034486266s STEP: Saw pod success May 13 22:08:24.388: INFO: Pod "pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97" satisfied condition "Succeeded or Failed" May 13 22:08:24.391: INFO: Trying to get logs from node node2 pod pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97 container secret-volume-test: STEP: delete the pod May 13 22:08:24.405: INFO: Waiting for pod pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97 to disappear May 13 22:08:24.408: INFO: Pod pod-projected-secrets-7ebef413-1b20-44ad-b6b5-e38d8bfe9e97 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:24.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7199" for this suite. • [SLOW TEST:16.099 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":593,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:20.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:08:20.526: INFO: Creating deployment "test-recreate-deployment" May 13 22:08:20.529: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 13 22:08:20.534: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 13 22:08:22.540: INFO: Waiting deployment "test-recreate-deployment" to complete May 13 22:08:22.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076500, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076500, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076500, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788076500, loc:(*time.Location)(0x9e2e180)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} May 13 22:08:24.546: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 13 22:08:24.552: INFO: Updating deployment test-recreate-deployment May 13 22:08:24.552: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 13 22:08:24.591: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2323 166334a7-3afc-4525-a0b8-4c6c7792f218 49312 2 2022-05-13 22:08:20 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-05-13 22:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-13 22:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bbc0c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-05-13 22:08:24 +0000 UTC,LastTransitionTime:2022-05-13 22:08:24 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2022-05-13 22:08:24 +0000 UTC,LastTransitionTime:2022-05-13 22:08:20 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 13 22:08:24.594: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-2323 b48a588c-782a-4f91-88de-1168acbbe5fe 49311 1 2022-05-13 22:08:24 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 166334a7-3afc-4525-a0b8-4c6c7792f218 0xc004bbc540 0xc004bbc541}] [] [{kube-controller-manager Update apps/v1 2022-05-13 22:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"166334a7-3afc-4525-a0b8-4c6c7792f218\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bbc5b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 22:08:24.594: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 13 22:08:24.594: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-2323 c813a489-f72e-4468-859c-c14df952edd8 49300 2 2022-05-13 22:08:20 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment 
test-recreate-deployment 166334a7-3afc-4525-a0b8-4c6c7792f218 0xc004bbc447 0xc004bbc448}] [] [{kube-controller-manager Update apps/v1 2022-05-13 22:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"166334a7-3afc-4525-a0b8-4c6c7792f218\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bbc4d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 13 22:08:24.597: INFO: Pod "test-recreate-deployment-85d47dcb4-rtqtp" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-rtqtp test-recreate-deployment-85d47dcb4- deployment-2323 abcbbd10-8930-46bd-994b-940a3752a359 49313 0 2022-05-13 22:08:24 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 b48a588c-782a-4f91-88de-1168acbbe5fe 0xc004bbc9ef 0xc004bbca00}] [] [{kube-controller-manager Update v1 2022-05-13 22:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b48a588c-782a-4f91-88de-1168acbbe5fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-13 22:08:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rhqgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhqgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:08:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-13 22:08:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-05-13 22:08:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:24.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2323" for this suite. 
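------------------------------
Stripped of the managed-fields noise dumped above, the deployment under test reduces to the following shape; a strategy of type Recreate is what guarantees old pods are fully terminated before new ones are created (the rollout in this test swaps the original agnhost container for httpd):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate              # terminate all old pods before starting new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
------------------------------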
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":18,"skipped":360,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:24.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:24.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-649" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":19,"skipped":360,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:20.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:08:20.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b3f38e0-e596-4732-958c-a75d2fd25a14" in namespace "projected-1212" to be "Succeeded or Failed" May 13 22:08:20.655: INFO: Pod "downwardapi-volume-1b3f38e0-e596-4732-958c-a75d2fd25a14": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.210854ms May 13 22:08:22.659: INFO: Pod "downwardapi-volume-1b3f38e0-e596-4732-958c-a75d2fd25a14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005846208s May 13 22:08:24.663: INFO: Pod "downwardapi-volume-1b3f38e0-e596-4732-958c-a75d2fd25a14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009300311s May 13 22:08:26.666: INFO: Pod "downwardapi-volume-1b3f38e0-e596-4732-958c-a75d2fd25a14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012724801s May 13 22:08:28.669: INFO: Pod "downwardapi-volume-1b3f38e0-e596-4732-958c-a75d2fd25a14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016033467s STEP: Saw pod success May 13 22:08:28.669: INFO: Pod "downwardapi-volume-1b3f38e0-e596-4732-958c-a75d2fd25a14" satisfied condition "Succeeded or Failed" May 13 22:08:28.671: INFO: Trying to get logs from node node2 pod downwardapi-volume-1b3f38e0-e596-4732-958c-a75d2fd25a14 container client-container: STEP: delete the pod May 13 22:08:29.002: INFO: Waiting for pod downwardapi-volume-1b3f38e0-e596-4732-958c-a75d2fd25a14 to disappear May 13 22:08:29.005: INFO: Pod downwardapi-volume-1b3f38e0-e596-4732-958c-a75d2fd25a14 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:29.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1212" for this suite. • [SLOW TEST:8.391 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":721,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:24.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs May 13 22:08:24.502: INFO: Waiting up to 5m0s for pod "pod-71c2d590-de96-46ff-b559-11ab1a173bc1" in namespace "emptydir-1482" to be "Succeeded or Failed" May 13 22:08:24.504: INFO: Pod "pod-71c2d590-de96-46ff-b559-11ab1a173bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.644357ms May 13 22:08:26.507: INFO: Pod "pod-71c2d590-de96-46ff-b559-11ab1a173bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004603667s May 13 22:08:28.512: INFO: Pod "pod-71c2d590-de96-46ff-b559-11ab1a173bc1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.009309957s May 13 22:08:30.518: INFO: Pod "pod-71c2d590-de96-46ff-b559-11ab1a173bc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015346988s STEP: Saw pod success May 13 22:08:30.518: INFO: Pod "pod-71c2d590-de96-46ff-b559-11ab1a173bc1" satisfied condition "Succeeded or Failed" May 13 22:08:30.521: INFO: Trying to get logs from node node2 pod pod-71c2d590-de96-46ff-b559-11ab1a173bc1 container test-container: STEP: delete the pod May 13 22:08:30.537: INFO: Waiting for pod pod-71c2d590-de96-46ff-b559-11ab1a173bc1 to disappear May 13 22:08:30.539: INFO: Pod pod-71c2d590-de96-46ff-b559-11ab1a173bc1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:30.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1482" for this suite. • [SLOW TEST:6.077 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":623,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:24.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-8b8f127a-c38a-494f-8bc3-607b9ee57e99 STEP: Creating a pod to test consume configMaps May 13 22:08:24.687: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3144742d-45cb-424f-8403-47510f7c2234" in namespace "projected-4233" to be "Succeeded or Failed" May 13 22:08:24.689: INFO: Pod "pod-projected-configmaps-3144742d-45cb-424f-8403-47510f7c2234": Phase="Pending", Reason="", readiness=false. Elapsed: 2.417448ms May 13 22:08:26.694: INFO: Pod "pod-projected-configmaps-3144742d-45cb-424f-8403-47510f7c2234": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006826536s May 13 22:08:28.700: INFO: Pod "pod-projected-configmaps-3144742d-45cb-424f-8403-47510f7c2234": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012915922s May 13 22:08:30.704: INFO: Pod "pod-projected-configmaps-3144742d-45cb-424f-8403-47510f7c2234": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01684122s STEP: Saw pod success May 13 22:08:30.704: INFO: Pod "pod-projected-configmaps-3144742d-45cb-424f-8403-47510f7c2234" satisfied condition "Succeeded or Failed" May 13 22:08:30.706: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-3144742d-45cb-424f-8403-47510f7c2234 container agnhost-container: STEP: delete the pod May 13 22:08:30.748: INFO: Waiting for pod pod-projected-configmaps-3144742d-45cb-424f-8403-47510f7c2234 to disappear May 13 22:08:30.750: INFO: Pod pod-projected-configmaps-3144742d-45cb-424f-8403-47510f7c2234 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:30.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4233" for this suite. • [SLOW TEST:6.106 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":364,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:29.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 13 22:08:32.079: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:32.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8657" for this suite. 
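Reference sketch for the termination-message case above: with TerminationMessagePolicy set to FallbackToLogsOnError, the kubelet copies the tail of the container log into the pod status when the container fails without writing a termination-message file, which is why the log shows "Expected: &{DONE} to match ... DONE". A hedged container fragment (the name, image, and command are illustrative, not taken from the test):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-demo", // illustrative name
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo DONE; exit 1"}, // log tail becomes the message
		// Nothing is written to TerminationMessagePath (default
		// /dev/termination-log), so on failure the kubelet falls back to
		// the last chunk of log output for Status.Message.
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}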
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":729,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:30.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:08:30.819: INFO: The status of Pod pod-secrets-87e03197-484f-4eb6-99a8-c1bc4dee579b is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:32.822: INFO: The status of Pod pod-secrets-87e03197-484f-4eb6-99a8-c1bc4dee579b is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:34.822: INFO: The status of Pod pod-secrets-87e03197-484f-4eb6-99a8-c1bc4dee579b is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:34.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-473" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":21,"skipped":374,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:32.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 13 22:08:32.159: INFO: Waiting up to 5m0s for pod "downward-api-4d18be18-bc94-4903-991d-9dac9bed40e4" in namespace "downward-api-3211" to be "Succeeded or Failed" May 13 22:08:32.161: INFO: Pod "downward-api-4d18be18-bc94-4903-991d-9dac9bed40e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049154ms May 13 22:08:34.164: INFO: Pod "downward-api-4d18be18-bc94-4903-991d-9dac9bed40e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005244852s May 13 22:08:36.168: INFO: Pod "downward-api-4d18be18-bc94-4903-991d-9dac9bed40e4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00939453s STEP: Saw pod success May 13 22:08:36.168: INFO: Pod "downward-api-4d18be18-bc94-4903-991d-9dac9bed40e4" satisfied condition "Succeeded or Failed" May 13 22:08:36.171: INFO: Trying to get logs from node node2 pod downward-api-4d18be18-bc94-4903-991d-9dac9bed40e4 container dapi-container: STEP: delete the pod May 13 22:08:36.183: INFO: Waiting for pod downward-api-4d18be18-bc94-4903-991d-9dac9bed40e4 to disappear May 13 22:08:36.186: INFO: Pod downward-api-4d18be18-bc94-4903-991d-9dac9bed40e4 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:36.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3211" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":742,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:30.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:36.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1017" for this suite. 
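Reference sketch for the Watchers case above: the test records resourceVersions, opens several watches from those versions, and checks every watcher sees events in the same order. A minimal client-go sketch of opening a watch from a known resourceVersion, assuming the suite's kubeconfig path and using the "default" namespace as an illustrative target:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the suite uses /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	list, err := cs.CoreV1().ConfigMaps("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Watch from the list's resourceVersion: the API server delivers
	// subsequent events in the same order to every watcher started from
	// the same version, which is what the test asserts.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: list.ResourceVersion,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() { // runs until interrupted
		fmt.Printf("event: %s\n", ev.Type)
	}
}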
• [SLOW TEST:5.807 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":39,"skipped":688,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:36.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token May 13 22:08:37.058: INFO: created pod pod-service-account-defaultsa May 13 22:08:37.058: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 13 22:08:37.067: INFO: created pod pod-service-account-mountsa May 13 22:08:37.067: INFO: pod pod-service-account-mountsa service account token volume mount: true May 13 22:08:37.077: INFO: created pod pod-service-account-nomountsa May 13 22:08:37.077: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 13 22:08:37.086: INFO: created pod pod-service-account-defaultsa-mountspec May 13 22:08:37.087: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 13 22:08:37.096: INFO: created pod pod-service-account-mountsa-mountspec May 13 22:08:37.096: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 13 22:08:37.106: INFO: created pod pod-service-account-nomountsa-mountspec May 13 22:08:37.106: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 13 22:08:37.115: INFO: created pod pod-service-account-defaultsa-nomountspec May 13 22:08:37.115: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 13 22:08:37.124: INFO: created pod pod-service-account-mountsa-nomountspec May 13 22:08:37.124: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 13 22:08:37.199: INFO: created pod pod-service-account-nomountsa-nomountspec May 13 22:08:37.199: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:37.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5896" for this suite. 
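Reference sketch for the ServiceAccounts case above: token automounting can be set on the ServiceAccount or on the pod, and the pod-level field takes precedence, which is why the mountsa/nomountsa/mountspec/nomountspec combinations above resolve the way they do. A short pod fragment opting out at the pod level (the pod name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	automount := false
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-nomount"}, // illustrative name
		Spec: corev1.PodSpec{
			ServiceAccountName: "default",
			// Pod-level setting; overrides AutomountServiceAccountToken
			// on the ServiceAccount itself.
			AutomountServiceAccountToken: &automount,
			Containers: []corev1.Container{{Name: "c", Image: "busybox"}},
		},
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}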
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":40,"skipped":718,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:37.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:37.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6186" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":761,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:36.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:08:36.241: INFO: Waiting up to 5m0s for pod "downwardapi-volume-155497c3-8cd0-47c0-9572-cc59341f98af" in namespace "downward-api-7735" to be "Succeeded or Failed" May 13 22:08:36.243: INFO: Pod "downwardapi-volume-155497c3-8cd0-47c0-9572-cc59341f98af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.571083ms May 13 22:08:38.247: INFO: Pod "downwardapi-volume-155497c3-8cd0-47c0-9572-cc59341f98af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005917609s May 13 22:08:40.251: INFO: Pod "downwardapi-volume-155497c3-8cd0-47c0-9572-cc59341f98af": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010230723s STEP: Saw pod success May 13 22:08:40.251: INFO: Pod "downwardapi-volume-155497c3-8cd0-47c0-9572-cc59341f98af" satisfied condition "Succeeded or Failed" May 13 22:08:40.253: INFO: Trying to get logs from node node1 pod downwardapi-volume-155497c3-8cd0-47c0-9572-cc59341f98af container client-container: STEP: delete the pod May 13 22:08:40.275: INFO: Waiting for pod downwardapi-volume-155497c3-8cd0-47c0-9572-cc59341f98af to disappear May 13 22:08:40.278: INFO: Pod downwardapi-volume-155497c3-8cd0-47c0-9572-cc59341f98af no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:40.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7735" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":748,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:14.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-7012 STEP: creating service affinity-clusterip in namespace services-7012 STEP: creating replication controller affinity-clusterip in namespace services-7012 I0513 22:08:15.021007 23 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-7012, replica count: 3 I0513 22:08:18.073350 23 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:08:21.073619 23 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:08:24.074387 23 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:08:27.075503 23 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 22:08:27.082: INFO: Creating new exec pod May 13 22:08:32.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7012 exec execpod-affinitys8ct2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' May 13 22:08:32.401: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" May 13 22:08:32.402: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; 
charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 13 22:08:32.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7012 exec execpod-affinitys8ct2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.21.105 80' May 13 22:08:32.656: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.21.105 80\nConnection to 10.233.21.105 80 port [tcp/http] succeeded!\n" May 13 22:08:32.656: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 13 22:08:32.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7012 exec execpod-affinitys8ct2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.21.105:80/ ; done' May 13 22:08:32.948: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.105:80/\n" May 13 22:08:32.948: INFO: stdout: "\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll\naffinity-clusterip-6dcll" May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: 
INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Received response from host: affinity-clusterip-6dcll May 13 22:08:32.948: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-7012, will wait for the garbage collector to delete the pods May 13 22:08:33.014: INFO: Deleting ReplicationController affinity-clusterip took: 4.346832ms May 13 22:08:33.114: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.516837ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:41.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7012" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:26.445 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":41,"skipped":765,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:37.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 13 22:08:37.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-326ca15d-5968-4942-8509-cea6401270d8" in namespace "projected-7058" to be "Succeeded or Failed" May 13 22:08:37.384: INFO: Pod "downwardapi-volume-326ca15d-5968-4942-8509-cea6401270d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.735966ms May 13 22:08:39.389: INFO: Pod "downwardapi-volume-326ca15d-5968-4942-8509-cea6401270d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008090666s May 13 22:08:41.393: INFO: Pod "downwardapi-volume-326ca15d-5968-4942-8509-cea6401270d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011323885s May 13 22:08:43.396: INFO: Pod "downwardapi-volume-326ca15d-5968-4942-8509-cea6401270d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014512632s May 13 22:08:45.400: INFO: Pod "downwardapi-volume-326ca15d-5968-4942-8509-cea6401270d8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.018652324s STEP: Saw pod success May 13 22:08:45.400: INFO: Pod "downwardapi-volume-326ca15d-5968-4942-8509-cea6401270d8" satisfied condition "Succeeded or Failed" May 13 22:08:45.402: INFO: Trying to get logs from node node1 pod downwardapi-volume-326ca15d-5968-4942-8509-cea6401270d8 container client-container: STEP: delete the pod May 13 22:08:45.416: INFO: Waiting for pod downwardapi-volume-326ca15d-5968-4942-8509-cea6401270d8 to disappear May 13 22:08:45.418: INFO: Pod downwardapi-volume-326ca15d-5968-4942-8509-cea6401270d8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:45.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7058" for this suite. • [SLOW TEST:8.077 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":769,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:11.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-zjwl STEP: Creating a pod to test atomic-volume-subpath May 13 22:08:11.514: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zjwl" in namespace "subpath-1004" to be "Succeeded or Failed" May 13 22:08:11.517: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121999ms May 13 22:08:13.519: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004773239s May 13 22:08:15.524: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009397121s May 13 22:08:17.526: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011936699s May 13 22:08:19.530: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015634083s May 13 22:08:21.534: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019793128s May 13 22:08:23.539: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.024523573s May 13 22:08:25.543: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 14.028505484s May 13 22:08:27.546: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 16.031891678s May 13 22:08:29.550: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 18.035262644s May 13 22:08:31.554: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 20.039675936s May 13 22:08:33.558: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 22.043338133s May 13 22:08:35.562: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 24.047884647s May 13 22:08:37.565: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 26.050706318s May 13 22:08:39.569: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 28.05479878s May 13 22:08:41.573: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 30.058527889s May 13 22:08:43.578: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 32.063305861s May 13 22:08:45.582: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 34.067220184s May 13 22:08:47.584: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Running", Reason="", readiness=true. Elapsed: 36.069864493s May 13 22:08:49.588: INFO: Pod "pod-subpath-test-downwardapi-zjwl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.073646875s STEP: Saw pod success May 13 22:08:49.588: INFO: Pod "pod-subpath-test-downwardapi-zjwl" satisfied condition "Succeeded or Failed" May 13 22:08:49.590: INFO: Trying to get logs from node node2 pod pod-subpath-test-downwardapi-zjwl container test-container-subpath-downwardapi-zjwl: STEP: delete the pod May 13 22:08:49.602: INFO: Waiting for pod pod-subpath-test-downwardapi-zjwl to disappear May 13 22:08:49.604: INFO: Pod pod-subpath-test-downwardapi-zjwl no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-zjwl May 13 22:08:49.604: INFO: Deleting pod "pod-subpath-test-downwardapi-zjwl" in namespace "subpath-1004" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:49.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1004" for this suite. 
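Reference sketch for the Subpath case above: an atomically written downwardAPI volume is mounted with subPath so the container sees a single file of the volume. A hedged fragment of the volume and mount (paths and names are illustrative, not the test's exact values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "downward",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	// SubPath mounts a single entry of the (atomically updated) volume.
	mount := corev1.VolumeMount{Name: "downward", MountPath: "/etc/podname", SubPath: "podname"}

	out, _ := json.MarshalIndent(struct {
		Volume corev1.Volume      `json:"volume"`
		Mount  corev1.VolumeMount `json:"mount"`
	}{vol, mount}, "", "  ")
	fmt.Println(string(out))
}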
• [SLOW TEST:38.135 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":208,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:34.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 13 22:08:34.891: INFO: The status of Pod annotationupdateebaedaec-2029-4120-b6f7-2575f34d73dd is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:36.895: INFO: The status of Pod annotationupdateebaedaec-2029-4120-b6f7-2575f34d73dd is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:38.899: INFO: The status of Pod annotationupdateebaedaec-2029-4120-b6f7-2575f34d73dd is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:40.895: INFO: The status of Pod annotationupdateebaedaec-2029-4120-b6f7-2575f34d73dd is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:42.896: INFO: The status of Pod annotationupdateebaedaec-2029-4120-b6f7-2575f34d73dd is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:44.897: INFO: The status of Pod annotationupdateebaedaec-2029-4120-b6f7-2575f34d73dd is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:46.897: INFO: The status of Pod annotationupdateebaedaec-2029-4120-b6f7-2575f34d73dd is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:48.898: INFO: The status of Pod annotationupdateebaedaec-2029-4120-b6f7-2575f34d73dd is Running (Ready = true) May 13 22:08:49.418: INFO: Successfully updated pod "annotationupdateebaedaec-2029-4120-b6f7-2575f34d73dd" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:51.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4227" for this suite. 
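Reference sketch for the annotation-update case above: a downwardAPI volume file backed by metadata.annotations is rewritten by the kubelet when the pod's annotations change, which is what the test polls for after "Successfully updated pod". A hedged fragment (the volume name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo", // illustrative name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					// The kubelet rewrites this file when the pod's
					// annotations change.
					Path:     "annotations",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}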
• [SLOW TEST:16.688 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":375,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:49.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 13 22:08:49.683: INFO: Waiting up to 5m0s for pod "downward-api-08456304-15d3-4eb5-b1ba-4ce07145da2d" in namespace "downward-api-3288" to be "Succeeded or Failed" May 13 22:08:49.685: INFO: Pod "downward-api-08456304-15d3-4eb5-b1ba-4ce07145da2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165412ms May 13 22:08:51.688: INFO: Pod "downward-api-08456304-15d3-4eb5-b1ba-4ce07145da2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005577427s May 13 22:08:53.692: INFO: Pod "downward-api-08456304-15d3-4eb5-b1ba-4ce07145da2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009891428s May 13 22:08:55.698: INFO: Pod "downward-api-08456304-15d3-4eb5-b1ba-4ce07145da2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014935803s STEP: Saw pod success May 13 22:08:55.698: INFO: Pod "downward-api-08456304-15d3-4eb5-b1ba-4ce07145da2d" satisfied condition "Succeeded or Failed" May 13 22:08:55.701: INFO: Trying to get logs from node node1 pod downward-api-08456304-15d3-4eb5-b1ba-4ce07145da2d container dapi-container: STEP: delete the pod May 13 22:08:55.714: INFO: Waiting for pod downward-api-08456304-15d3-4eb5-b1ba-4ce07145da2d to disappear May 13 22:08:55.716: INFO: Pod downward-api-08456304-15d3-4eb5-b1ba-4ce07145da2d no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:55.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3288" for this suite. 
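Reference sketch for the host-IP case above: the downward API exposes status.hostIP as an environment variable, resolved by the kubelet at container start (the HOST_IP variable name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := []corev1.EnvVar{{
		Name: "HOST_IP", // illustrative variable name
		ValueFrom: &corev1.EnvVarSource{
			// Resolved by the kubelet when the container starts.
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
		},
	}}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}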
• [SLOW TEST:6.073 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":228,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:40.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. May 13 22:08:40.326: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:42.330: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:44.329: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:46.332: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 13 22:08:46.348: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:48.351: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:50.351: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook May 13 22:08:50.365: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 13 22:08:50.367: INFO: Pod pod-with-poststart-http-hook still exists May 13 22:08:52.371: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 13 22:08:52.374: INFO: Pod pod-with-poststart-http-hook still exists May 13 22:08:54.368: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 13 22:08:54.371: INFO: Pod pod-with-poststart-http-hook still exists May 13 22:08:56.369: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 13 22:08:56.373: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:56.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1614" for this suite. 
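Reference sketch for the lifecycle-hook case above: a postStart httpGet hook fires against the handler pod as the container starts, and the container is not considered started until the hook succeeds. A hedged fragment, assuming a recent k8s.io/api where the handler type is named LifecycleHandler (older releases called it Handler); the host, path, and port here are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "with-poststart", // illustrative name
		Image: "busybox",
		Lifecycle: &corev1.Lifecycle{
			// The kubelet blocks the container's startup completion
			// until this GET succeeds.
			PostStart: &corev1.LifecycleHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Host: "10.0.0.1", // illustrative handler-pod IP
					Path: "/echo",    // illustrative path
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}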
• [SLOW TEST:16.087 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":751,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ May 13 22:08:56.403: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:04:50.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-8b5a9ac9-774e-4a34-a641-b2827a4a5abe in namespace container-probe-5341 May 13 22:04:56.733: INFO: Started pod test-webserver-8b5a9ac9-774e-4a34-a641-b2827a4a5abe in namespace container-probe-5341 STEP: checking the pod's current state and verifying that restartCount is present May 13 22:04:56.735: INFO: Initial restart count of pod test-webserver-8b5a9ac9-774e-4a34-a641-b2827a4a5abe is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:08:57.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5341" for this suite. 
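Reference sketch for the liveness-probe case above: the pod passes its httpGet probe for the full observation window, so restartCount stays at 0. A hedged probe fragment, assuming a recent k8s.io/api where the embedded handler struct is named ProbeHandler; the path, port, and timings are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
		},
		InitialDelaySeconds: 15,
		PeriodSeconds:       10,
		FailureThreshold:    3, // restart only after 3 consecutive failures
	}
	out, _ := json.MarshalIndent(probe, "", "  ")
	fmt.Println(string(out))
}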
• [SLOW TEST:246.562 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":517,"failed":0} May 13 22:08:57.250: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:51.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running May 13 22:08:53.599: INFO: running pods: 0 < 1 May 13 22:08:55.605: INFO: running pods: 0 < 1 May 13 22:08:57.602: INFO: running pods: 0 < 1 May 13 22:08:59.603: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:09:01.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-9053" for this suite. 
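Reference sketch for the DisruptionController case above: a PodDisruptionBudget's spec is set by the client, while the controller fills in status (currentHealthy, disruptionsAllowed, observedGeneration); "waiting for the pdb to be processed" in the log means waiting until observedGeneration catches up with the spec. A minimal policy/v1 object (name, label, and minAvailable are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	minAvailable := intstr.FromInt(1) // illustrative budget
	pdb := policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-pdb"}, // illustrative name
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
		},
	}
	out, _ := json.MarshalIndent(pdb, "", "  ")
	fmt.Println(string(out))
}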
• [SLOW TEST:10.086 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":23,"skipped":378,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} May 13 22:09:01.638: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:20.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 13 22:08:24.944: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3 PodName:var-expansion-f768ffed-847e-40de-8d17-8a778bed4aa4 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:08:24.944: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path May 13 22:08:25.062: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3 PodName:var-expansion-f768ffed-847e-40de-8d17-8a778bed4aa4 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:08:25.062: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value May 13 22:08:25.659: INFO: Successfully updated pod "var-expansion-f768ffed-847e-40de-8d17-8a778bed4aa4" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 13 22:08:25.662: INFO: Deleting pod "var-expansion-f768ffed-847e-40de-8d17-8a778bed4aa4" in namespace "var-expansion-3" May 13 22:08:25.666: INFO: Wait up to 5m0s for pod "var-expansion-f768ffed-847e-40de-8d17-8a778bed4aa4" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:09:03.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3" for this suite. 
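Reference sketch for the subpath-writing case above: volume mounts can use subPathExpr, which expands $(VAR) references from the container's environment, so each pod can write under its own directory inside a shared volume. A hedged fragment (the variable, mount path, and volume name are illustrative, not the test's exact values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox",
		Env: []corev1.EnvVar{{
			Name:      "POD_NAME",
			ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}},
		}},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "workdir", // illustrative volume name
			MountPath: "/volume_mount",
			// SubPathExpr expands $(POD_NAME) from the container's env.
			SubPathExpr: "$(POD_NAME)",
		}},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}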
• [SLOW TEST:42.786 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":51,"skipped":873,"failed":0} May 13 22:09:03.686: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:55.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition May 13 22:08:55.773: INFO: Waiting up to 5m0s for pod "var-expansion-86ca441a-6c75-4382-8b37-f7d111432e47" in namespace "var-expansion-946" to be "Succeeded or Failed" May 13 22:08:55.777: INFO: Pod "var-expansion-86ca441a-6c75-4382-8b37-f7d111432e47": Phase="Pending", Reason="", readiness=false. Elapsed: 3.304219ms May 13 22:08:57.780: INFO: Pod "var-expansion-86ca441a-6c75-4382-8b37-f7d111432e47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006851553s May 13 22:08:59.784: INFO: Pod "var-expansion-86ca441a-6c75-4382-8b37-f7d111432e47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011041381s May 13 22:09:01.789: INFO: Pod "var-expansion-86ca441a-6c75-4382-8b37-f7d111432e47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015990256s May 13 22:09:03.793: INFO: Pod "var-expansion-86ca441a-6c75-4382-8b37-f7d111432e47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019904262s STEP: Saw pod success May 13 22:09:03.793: INFO: Pod "var-expansion-86ca441a-6c75-4382-8b37-f7d111432e47" satisfied condition "Succeeded or Failed" May 13 22:09:03.796: INFO: Trying to get logs from node node2 pod var-expansion-86ca441a-6c75-4382-8b37-f7d111432e47 container dapi-container: STEP: delete the pod May 13 22:09:03.810: INFO: Waiting for pod var-expansion-86ca441a-6c75-4382-8b37-f7d111432e47 to disappear May 13 22:09:03.812: INFO: Pod var-expansion-86ca441a-6c75-4382-8b37-f7d111432e47 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:09:03.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-946" for this suite. 
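Reference sketch for the env-composition case above: a variable's value may reference earlier variables with $(NAME), and the kubelet expands the reference at container start; references to undefined names are left as literal text. A minimal fragment (variable names and values are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := []corev1.EnvVar{
		{Name: "FOO", Value: "foo-value"},
		// $(FOO) is expanded because FOO is defined earlier in the list.
		{Name: "COMPOSED", Value: "prefix-$(FOO)-suffix"},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}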
• [SLOW TEST:8.081 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":235,"failed":0} May 13 22:09:03.822: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:45.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-5387 STEP: creating a selector STEP: Creating the service pods in kubernetes May 13 22:08:45.494: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 13 22:08:45.526: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:47.529: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 13 22:08:49.530: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:08:51.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:08:53.530: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:08:55.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:08:57.530: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:08:59.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:09:01.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:09:03.533: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:09:05.531: INFO: The status of Pod netserver-0 is Running (Ready = false) May 13 22:09:07.531: INFO: The status of Pod netserver-0 is Running (Ready = true) May 13 22:09:07.536: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 13 22:09:11.561: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 13 22:09:11.561: INFO: Breadth first check of 10.244.3.67 on host 10.10.190.207... May 13 22:09:11.564: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.190:9080/dial?request=hostname&protocol=http&host=10.244.3.67&port=8080&tries=1'] Namespace:pod-network-test-5387 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:09:11.564: INFO: >>> kubeConfig: /root/.kube/config May 13 22:09:11.657: INFO: Waiting for responses: map[] May 13 22:09:11.657: INFO: reached 10.244.3.67 after 0/1 tries May 13 22:09:11.657: INFO: Breadth first check of 10.244.4.187 on host 10.10.190.208... 
May 13 22:09:11.660: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.190:9080/dial?request=hostname&protocol=http&host=10.244.4.187&port=8080&tries=1'] Namespace:pod-network-test-5387 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 22:09:11.660: INFO: >>> kubeConfig: /root/.kube/config May 13 22:09:11.744: INFO: Waiting for responses: map[] May 13 22:09:11.744: INFO: reached 10.244.4.187 after 0/1 tries May 13 22:09:11.744: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:09:11.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5387" for this suite. • [SLOW TEST:26.284 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":795,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} May 13 22:09:11.759: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:08:41.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted May 13 22:09:16.547: INFO: EndpointSlice for Service endpointslice-6533/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:09:26.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-6533" for this suite. 
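[Editor's annotation] The EndpointSlice spec above (including the "not found" poll at 22:09:16) works by repeatedly listing slices labeled with the owning Service's name until the controller recreates them. A minimal client-go sketch of that query, using the namespace and Service name seen in the log; error handling is abbreviated:

    package main

    import (
        "context"
        "fmt"

        discoveryv1 "k8s.io/api/discovery/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // EndpointSlices carry the kubernetes.io/service-name label, which is
        // how the controller (and this query) ties them back to a Service.
        slices, err := cs.DiscoveryV1().EndpointSlices("endpointslice-6533").List(context.TODO(),
            metav1.ListOptions{LabelSelector: discoveryv1.LabelServiceName + "=example-named-port"})
        if err != nil {
            panic(err)
        }
        for _, s := range slices.Items {
            fmt.Println(s.Name, len(s.Endpoints))
        }
    }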
• [SLOW TEST:45.126 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":42,"skipped":768,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} May 13 22:09:26.569: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":19,"skipped":250,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:49.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4799 May 13 22:07:49.335: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:51.339: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 13 22:07:53.338: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) May 13 22:07:53.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4799 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 13 22:07:53.903: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 13 22:07:53.903: INFO: stdout: "iptables" May 13 22:07:53.903: INFO: proxyMode: iptables May 13 22:07:53.909: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 13 22:07:53.911: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-4799 STEP: creating replication controller affinity-clusterip-timeout in namespace services-4799 I0513 22:07:53.921497 39 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-4799, replica count: 3 I0513 22:07:56.972369 39 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0513 22:07:59.972774 39 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 13 22:07:59.978: INFO: Creating new exec pod May 13 22:08:04.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4799 exec execpod-affinitym74k6 -- /bin/sh 
-x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' May 13 22:08:05.252: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip-timeout 80\n+ echo hostName\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" May 13 22:08:05.252: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 13 22:08:05.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4799 exec execpod-affinitym74k6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.34.73 80' May 13 22:08:05.481: INFO: stderr: "+ nc -v -t -w 2 10.233.34.73 80\nConnection to 10.233.34.73 80 port [tcp/http] succeeded!\n+ echo hostName\n" May 13 22:08:05.481: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 13 22:08:05.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4799 exec execpod-affinitym74k6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.34.73:80/ ; done' May 13 22:08:05.764: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n" May 13 22:08:05.764: INFO: stdout: "\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6\naffinity-clusterip-timeout-c9bf6" May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 
22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Received response from host: affinity-clusterip-timeout-c9bf6 May 13 22:08:05.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4799 exec execpod-affinitym74k6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.34.73:80/' May 13 22:08:06.012: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n" May 13 22:08:06.012: INFO: stdout: "affinity-clusterip-timeout-c9bf6" May 13 22:08:26.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4799 exec execpod-affinitym74k6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.34.73:80/' May 13 22:08:26.758: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n" May 13 22:08:26.759: INFO: stdout: "affinity-clusterip-timeout-c9bf6" May 13 22:08:46.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4799 exec execpod-affinitym74k6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.34.73:80/' May 13 22:08:47.049: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n" May 13 22:08:47.049: INFO: stdout: "affinity-clusterip-timeout-c9bf6" May 13 22:09:07.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4799 exec execpod-affinitym74k6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.34.73:80/' May 13 22:09:07.315: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n" May 13 22:09:07.315: INFO: stdout: "affinity-clusterip-timeout-c9bf6" May 13 22:09:27.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4799 exec execpod-affinitym74k6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.34.73:80/' May 13 22:09:27.575: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n" May 13 22:09:27.575: INFO: stdout: "affinity-clusterip-timeout-c9bf6" May 13 22:09:47.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4799 exec execpod-affinitym74k6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.34.73:80/' May 13 22:09:47.825: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.34.73:80/\n" May 13 22:09:47.825: INFO: stdout: "affinity-clusterip-timeout-dg2xn" May 13 22:09:47.825: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-4799, will wait for the garbage collector to delete the pods May 13 22:09:47.893: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 4.600959ms May 13 22:09:47.993: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.638166ms [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:10:02.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4799" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:133.117 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":250,"failed":0} May 13 22:10:02.420: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:45.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0513 22:07:45.998058 32 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a suspended cronjob STEP: Ensuring no jobs are scheduled STEP: Ensuring no job exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:12:46.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-122" for this suite. 
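[Editor's annotation] The 133-second session-affinity timeout spec above hinges on a Service with ClientIP affinity plus an idle timeout: the 20-second curl probes keep landing on affinity-clusterip-timeout-c9bf6 until the affinity entry lapses and a later probe reaches dg2xn. A sketch of that Service shape in client-go types; the selector, ports, and the 10-second timeout here are assumptions for illustration, not the spec's exact values:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // affinityService pins each client IP to one backend until the
    // configured idle timeout expires, after which traffic may be
    // re-balanced to a different pod.
    func affinityService() *corev1.Service {
        timeout := int32(10) // assumed; the spec configures its own value
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
            Spec: corev1.ServiceSpec{
                Selector:        map[string]string{"app": "affinity"}, // assumed selector
                Ports:           []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
                SessionAffinity: corev1.ServiceAffinityClientIP,
                SessionAffinityConfig: &corev1.SessionAffinityConfig{
                    ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
                },
            },
        }
    }

    func main() { fmt.Println(affinityService().Name) }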
• [SLOW TEST:300.046 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":18,"skipped":359,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} May 13 22:12:46.025: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:07:47.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-3982 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-3982 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3982 May 13 22:07:47.930: INFO: Found 0 stateful pods, waiting for 1 May 13 22:07:57.933: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 13 22:07:57.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 22:07:58.226: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 13 22:07:58.226: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 22:07:58.226: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 22:07:58.229: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 13 22:08:08.233: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 13 22:08:08.233: INFO: Waiting for statefulset status.replicas updated to 0 May 13 22:08:08.244: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:08:08.244: INFO: ss-0 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC }] May 13 22:08:08.244: INFO: May 13 22:08:08.244: INFO: StatefulSet ss has not reached scale 3, at 1 May 13 22:08:09.248: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997799944s May 13 22:08:10.251: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993485076s May 13 22:08:11.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.990121626s May 13 22:08:12.258: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.987352968s May 13 22:08:13.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.982873656s May 13 22:08:14.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.978903741s May 13 22:08:15.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.974953093s May 13 22:08:16.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.971751843s May 13 22:08:17.276: INFO: Verifying statefulset ss doesn't scale past 3 for another 968.642317ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3982 May 13 22:08:18.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:08:18.545: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 13 22:08:18.545: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 22:08:18.545: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 22:08:18.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:08:19.106: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 13 22:08:19.106: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 22:08:19.106: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 22:08:19.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:08:19.354: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 13 22:08:19.354: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 13 22:08:19.354: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 13 22:08:19.357: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 13 22:08:19.357: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false May 13 22:08:29.363: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 13 22:08:29.363: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 13 22:08:29.363: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt 
with unhealthy stateful pod May 13 22:08:29.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 22:08:29.645: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 13 22:08:29.645: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 22:08:29.645: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 22:08:29.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 22:08:29.962: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 13 22:08:29.962: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 22:08:29.962: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 22:08:29.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 13 22:08:30.219: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 13 22:08:30.219: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 13 22:08:30.219: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 13 22:08:30.219: INFO: Waiting for statefulset status.replicas updated to 0 May 13 22:08:30.222: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 13 22:08:40.232: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 13 22:08:40.232: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 13 22:08:40.232: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 13 22:08:40.241: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:08:40.242: INFO: ss-0 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC }] May 13 22:08:40.242: INFO: ss-1 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:40.242: INFO: ss-2 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:40.242: INFO: May 13 22:08:40.242: INFO: StatefulSet ss has not reached scale 0, at 3 May 13 22:08:41.245: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:08:41.245: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC }] May 13 22:08:41.245: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:41.245: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:41.245: INFO: May 13 22:08:41.245: INFO: StatefulSet ss has not reached scale 0, at 3 May 13 22:08:42.251: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:08:42.251: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC }] May 13 22:08:42.251: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:42.251: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:42.251: INFO: May 13 22:08:42.251: 
INFO: StatefulSet ss has not reached scale 0, at 3 May 13 22:08:43.255: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:08:43.255: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC }] May 13 22:08:43.255: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:43.255: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:43.255: INFO: May 13 22:08:43.255: INFO: StatefulSet ss has not reached scale 0, at 3 May 13 22:08:44.258: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:08:44.258: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:07:47 +0000 UTC }] May 13 22:08:44.258: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:44.258: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:44.258: INFO: May 13 22:08:44.258: INFO: StatefulSet ss has not reached scale 0, at 3 May 13 22:08:45.261: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:08:45.262: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:45.262: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:45.262: INFO: May 13 22:08:45.262: INFO: StatefulSet ss has not reached scale 0, at 2 May 13 22:08:46.266: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:08:46.266: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:46.266: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:46.266: INFO: May 13 22:08:46.266: INFO: StatefulSet ss has not reached scale 0, at 2 May 13 22:08:47.270: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:08:47.270: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:47.270: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:47.270: INFO: May 13 22:08:47.270: INFO: StatefulSet ss has not reached scale 0, at 2 May 13 22:08:48.273: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:08:48.273: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:48.273: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:48.274: INFO: May 13 22:08:48.274: INFO: StatefulSet ss has not reached scale 0, at 2 May 13 22:08:49.278: INFO: POD NODE PHASE GRACE CONDITIONS May 13 22:08:49.278: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:49.278: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 22:08:08 +0000 UTC }] May 13 22:08:49.278: INFO: May 13 22:08:49.278: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3982 May 13 22:08:50.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:08:50.492: INFO: rc: 1 May 13 22:08:50.492: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 13 22:09:00.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:09:00.646: INFO: rc: 1 May 13 22:09:00.646: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:09:10.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:09:10.789: INFO: rc: 1 May
13 22:09:10.789: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:09:20.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:09:20.922: INFO: rc: 1 May 13 22:09:20.922: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:09:30.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:09:31.095: INFO: rc: 1 May 13 22:09:31.095: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:09:41.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:09:41.237: INFO: rc: 1 May 13 22:09:41.237: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:09:51.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:09:51.387: INFO: rc: 1 May 13 22:09:51.387: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:10:01.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:10:01.535: INFO: rc: 1 May 13 22:10:01.535: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:10:11.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:10:11.696: INFO: rc: 1 May 13 22:10:11.696: INFO: Waiting 10s to retry 
failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:10:21.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:10:21.855: INFO: rc: 1 May 13 22:10:21.855: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:10:31.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:10:32.008: INFO: rc: 1 May 13 22:10:32.008: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:10:42.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:10:42.173: INFO: rc: 1 May 13 22:10:42.173: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:10:52.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:10:52.322: INFO: rc: 1 May 13 22:10:52.322: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:11:02.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:11:02.476: INFO: rc: 1 May 13 22:11:02.476: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 13 22:11:12.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 13 22:11:12.614: INFO: rc: 1 May 13 22:11:12.614: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

May 13 22:11:22.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 13 22:11:22.753: INFO: rc: 1
May 13 22:11:22.753: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3982 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

[... the same Running / rc: 1 / Waiting 10s to retry cycle repeated every 10s from 22:11:32 through 22:13:54, 15 further attempts in all, each failing with the same 'pods "ss-1" not found' error; identical output elided ...]

May 13 22:13:55.050: INFO: rc: 1
May 13 22:13:55.050: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1:
May 13 22:13:55.050: INFO: Scaling statefulset ss to 0
May 13 22:13:55.072: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
May 13 22:13:55.075: INFO: Deleting all statefulset in ns statefulset-3982
May 13 22:13:55.078: INFO: Scaling statefulset ss to 0
May 13 22:13:55.085: INFO: Waiting for statefulset status.replicas updated to 0
May 13 22:13:55.088: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 22:13:55.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3982" for this suite.
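The long retry run above is the e2e framework's RunHostCmd helper re-issuing the same kubectl exec at a fixed 10s interval; because ss-1 had apparently already been torn down during the burst scale-down, every attempt failed with NotFound until the helper exhausted its retries, logged the (empty) stdout, and let the test proceed. A minimal sketch of that retry pattern in plain shell, assuming a fixed attempt cap (the cap and the script's structure are illustrative, not the framework's actual implementation):

    #!/bin/sh
    # Illustrative re-creation of the RunHostCmd retry loop seen above;
    # not the framework's actual code. Re-runs a pod exec every 10s.
    NS=statefulset-3982
    POD=ss-1
    CMD='mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    attempt=1
    while [ "$attempt" -le 16 ]; do        # assumed cap of 16 attempts
        if kubectl --kubeconfig=/root/.kube/config --namespace="$NS" \
                exec "$POD" -- /bin/sh -x -c "$CMD"; then
            exit 0                         # exec (and command) succeeded
        fi
        echo "Waiting 10s to retry failed RunHostCmd (attempt $attempt)"
        sleep 10
        attempt=$((attempt + 1))
    done
    echo "giving up: command never succeeded on $POD" >&2
    exit 1

Note that the "|| true" inside the container command only masks a failed mv; it cannot mask the exec itself failing when the pod no longer exists, which is why rc stays 1 on every attempt here.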
• [SLOW TEST:367.205 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":31,"skipped":517,"failed":0}
May 13 22:13:55.111: INFO: Running AfterSuite actions on all nodes
May 13 22:13:55.111: INFO: Running AfterSuite actions on node 1
May 13 22:13:55.111: INFO: Skipping dumping logs from cluster

Summarizing 6 Failures:

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

Ran 320 of 5773 Specs in 946.116 seconds
FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5453 Skipped

Ginkgo ran 1 suite in 15m47.7135472s
Test Suite Failed
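Five of the six failures involve NodePort Services (the sixth is the evicted-StatefulSet spec), so a targeted re-run is usually faster than repeating all 5773 specs. One way to do that is a Ginkgo focus regex against the e2e.test binary; a sketch, assuming the binary path and regex below (both illustrative, adjust for your build):

    # Re-run only the failed specs via a Ginkgo focus regex. Note the
    # regex also matches any other spec whose name contains "NodePort".
    ./e2e.test --kubeconfig=/root/.kube/config \
        -ginkgo.focus='NodePort|recreate evicted statefulset'

Given that the NodePort failures share reachability as the common factor (three of them at the same session-affinity assertions in service.go:2576 and service.go:2497), inspecting kube-proxy and node firewall rules on the target nodes is a reasonable first step before re-running.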