Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1620840044 - Will randomize all specs
Will run 5484 specs
Running in parallel across 10 nodes
May 12 17:20:45.852: INFO: >>> kubeConfig: /root/.kube/config
May 12 17:20:45.857: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 12 17:20:45.886: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 17:20:45.952: INFO: The status of Pod cmk-init-discover-node1-2x2zk is Succeeded, skipping waiting
May 12 17:20:45.952: INFO: The status of Pod cmk-init-discover-node2-qrd9v is Succeeded, skipping waiting
May 12 17:20:45.952: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 17:20:45.952: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 12 17:20:45.952: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 12 17:20:45.969: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 12 17:20:45.969: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 12 17:20:45.969: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 12 17:20:45.969: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 12 17:20:45.969: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 12 17:20:45.969: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 12 17:20:45.969: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 12 17:20:45.969: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 12 17:20:45.969: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 12 17:20:45.969: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 12 17:20:45.969: INFO: e2e test version: v1.19.10
May 12 17:20:45.969: INFO: kube-apiserver version: v1.19.8
May 12 17:20:45.970: INFO: >>> kubeConfig: /root/.kube/config
May 12 17:20:45.975: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSS
------------------------------
May 12 17:20:45.984: INFO: >>> kubeConfig: /root/.kube/config
May 12 17:20:46.004: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
May 12 17:20:45.996: INFO: >>> kubeConfig: /root/.kube/config
May 12 17:20:46.019: INFO: Cluster IP family: ipv4
SS
------------------------------
May 12 17:20:46.006: INFO: >>> kubeConfig: /root/.kube/config
May 12 17:20:46.021: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
May 12 17:20:46.006: INFO: >>> kubeConfig: /root/.kube/config
May 12 17:20:46.027: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
May 12 17:20:46.012: INFO: >>> kubeConfig: /root/.kube/config
May 12 17:20:46.032: INFO: Cluster IP family: ipv4
SS
------------------------------
May 12 17:20:46.007: INFO: >>> kubeConfig: /root/.kube/config
May 12 17:20:46.032: INFO: Cluster IP family: ipv4
May 12 17:20:46.014: INFO: >>> kubeConfig: /root/.kube/config
May 12 17:20:46.033: INFO: Cluster IP family: ipv4
S
------------------------------
May 12
17:20:46.013: INFO: >>> kubeConfig: /root/.kube/config May 12 17:20:46.033: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSS ------------------------------ May 12 17:20:46.018: INFO: >>> kubeConfig: /root/.kube/config May 12 17:20:46.040: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test May 12 17:20:46.121: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 12 17:20:46.123: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:46.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-6627" for this suite. 
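For reference, the Lease object the NodeLease test above inspects lives in the kube-node-lease namespace and can be examined directly; a minimal sketch, assuming kubectl access via the same kubeconfig and an illustrative node name (node1):
# List the per-node Lease objects the kubelets keep renewed (one per node).
kubectl --kubeconfig=/root/.kube/config -n kube-node-lease get leases
# Inspect a single lease; spec.renewTime should advance within spec.leaseDurationSeconds.
kubectl --kubeconfig=/root/.kube/config -n kube-node-lease get lease node1 -o yaml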
•SSSSSSSS ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":1,"skipped":23,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-pools May 12 17:20:46.811: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 12 17:20:46.812: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:34 May 12 17:20:46.814: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:46.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-pools-6974" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a cluster with multiple node pools [Feature:GKENodePool] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:38 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl May 12 17:20:46.268: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 12 17:20:46.270: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:183 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:48.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-5158" for this suite. 
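The rejected pod in the sysctl test above requests an unsafe ("greylisted") sysctl that the kubelet has not been told to allow; a minimal sketch of such a pod spec, with illustrative names (sysctl-demo, busybox):
# Pods request sysctls via the pod-level securityContext; unsafe sysctls are rejected
# unless the kubelet explicitly allows them (e.g. --allowed-unsafe-sysctls=kernel.msgmax).
cat <<EOF | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo        # illustrative name
spec:
  securityContext:
    sysctls:
    - name: kernel.msgmax  # unsafe sysctl, not in the default safe set
      value: "65536"
  containers:
  - name: test
    image: busybox         # illustrative image
    command: ["sleep", "3600"]
EOF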
• ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":1,"skipped":63,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime May 12 17:20:46.029: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 12 17:20:46.032: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:50.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8109" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":1,"skipped":19,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test May 12 17:20:46.271: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 12 17:20:46.273: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 May 12 17:20:46.290: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-6fd94965-89de-4956-96b1-cd8dae2826d8" in namespace "security-context-test-9394" to be "Succeeded or Failed" May 12 17:20:46.292: INFO: Pod "alpine-nnp-nil-6fd94965-89de-4956-96b1-cd8dae2826d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479765ms May 12 17:20:48.296: INFO: Pod "alpine-nnp-nil-6fd94965-89de-4956-96b1-cd8dae2826d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005694562s May 12 17:20:50.299: INFO: Pod "alpine-nnp-nil-6fd94965-89de-4956-96b1-cd8dae2826d8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.009507849s May 12 17:20:52.304: INFO: Pod "alpine-nnp-nil-6fd94965-89de-4956-96b1-cd8dae2826d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013611027s May 12 17:20:52.304: INFO: Pod "alpine-nnp-nil-6fd94965-89de-4956-96b1-cd8dae2826d8" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:52.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9394" for this suite. • [SLOW TEST:6.074 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":67,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime May 12 17:20:46.240: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 12 17:20:46.242: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 17:20:54.291: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:54.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-122" for this suite. 
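The termination-message test above expects the string DONE in the container's termination message; a minimal sketch of a pod writing it to the (default) termination message path, with illustrative names:
cat <<EOF | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                  # illustrative image
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log   # default path, shown explicitly
    terminationMessagePolicy: File
EOF
# The message then appears in status.containerStatuses[].state.terminated.message.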
• [SLOW TEST:8.088 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":1,"skipped":56,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test May 12 17:20:46.395: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 12 17:20:46.397: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 May 12 17:20:46.410: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-93b82d0d-d9e1-4f1e-bc1e-a60c04017535" in namespace "security-context-test-7895" to be "Succeeded or Failed" May 12 17:20:46.413: INFO: Pod "alpine-nnp-true-93b82d0d-d9e1-4f1e-bc1e-a60c04017535": Phase="Pending", Reason="", readiness=false. Elapsed: 2.659553ms May 12 17:20:48.417: INFO: Pod "alpine-nnp-true-93b82d0d-d9e1-4f1e-bc1e-a60c04017535": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006442197s May 12 17:20:50.421: INFO: Pod "alpine-nnp-true-93b82d0d-d9e1-4f1e-bc1e-a60c04017535": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01020592s May 12 17:20:52.429: INFO: Pod "alpine-nnp-true-93b82d0d-d9e1-4f1e-bc1e-a60c04017535": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018094837s May 12 17:20:54.432: INFO: Pod "alpine-nnp-true-93b82d0d-d9e1-4f1e-bc1e-a60c04017535": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02138188s May 12 17:20:54.432: INFO: Pod "alpine-nnp-true-93b82d0d-d9e1-4f1e-bc1e-a60c04017535" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:54.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7895" for this suite. 
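The AllowPrivilegeEscalation cases above toggle a single container-level securityContext field; a minimal sketch analogous to the alpine-nnp-true pod, with illustrative names:
cat <<EOF | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nnp-demo                    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: alpine                   # illustrative image
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: true   # when left unset and uid != 0, escalation is still permitted by default
EOF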
• [SLOW TEST:8.069 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 ------------------------------ SS ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":121,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:54.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:148 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:54.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-7292" for this suite. 
•SS ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":2,"skipped":161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test May 12 17:20:46.423: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 12 17:20:46.425: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 May 12 17:20:46.439: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-ecca3fe4-0c65-4c47-815b-65e56fd2c48e" in namespace "security-context-test-3222" to be "Succeeded or Failed" May 12 17:20:46.442: INFO: Pod "busybox-privileged-true-ecca3fe4-0c65-4c47-815b-65e56fd2c48e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.426712ms May 12 17:20:48.445: INFO: Pod "busybox-privileged-true-ecca3fe4-0c65-4c47-815b-65e56fd2c48e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005646114s May 12 17:20:50.448: INFO: Pod "busybox-privileged-true-ecca3fe4-0c65-4c47-815b-65e56fd2c48e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008584134s May 12 17:20:52.451: INFO: Pod "busybox-privileged-true-ecca3fe4-0c65-4c47-815b-65e56fd2c48e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011664455s May 12 17:20:54.453: INFO: Pod "busybox-privileged-true-ecca3fe4-0c65-4c47-815b-65e56fd2c48e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.013761281s May 12 17:20:54.453: INFO: Pod "busybox-privileged-true-ecca3fe4-0c65-4c47-815b-65e56fd2c48e" satisfied condition "Succeeded or Failed" May 12 17:20:54.874: INFO: Got logs for pod "busybox-privileged-true-ecca3fe4-0c65-4c47-815b-65e56fd2c48e": "" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:54.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3222" for this suite. 
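The privileged-container test above differs only in the privileged flag; a minimal sketch (illustrative names) running the same kind of host-level operation the busybox-privileged-true pod attempts:
cat <<EOF | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                  # illustrative image
    command: ["sh", "-c", "ip link add dummy0 type dummy"]   # succeeds only when privileged
    securityContext:
      privileged: true
EOF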
• [SLOW TEST:8.478 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 ------------------------------ SSS ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:55.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a docker exec liveness probe with timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215 May 12 17:20:55.283: INFO: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:55.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5936" for this suite. 
S [SKIPPING] [0.029 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a docker exec liveness probe with timeout [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215 The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:217 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:55.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:88 [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:55.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-3864" for this suite. •S ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":2,"skipped":371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:55.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 12 17:20:55.609: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:55.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-8661" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0512 17:20:55.623865 24 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 275 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001620d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00219c750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0044e3dc0, 0xc00219c750, 0xc0044e3dc0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00219c750, 0x49181f0283d397, 0xc00219c778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0x8c, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0044eeed0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00190b3e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00190b3e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000883fc0, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00219d6c0, 0xc0013b4000, 0x52e17e0, 0xc000160900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0013b4000, 0x0, 0x52e17e0, 0xc000160900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0013b4000, 0x52e17e0, 0xc000160900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001d8e280, 0xc0013b4000, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001d8e280, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001d8e280, 0xc002f4e0e8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00016c280, 0x7f239c8d1758, 0xc00451d980, 0x4c22012, 0x14, 0xc004549f50, 0x3, 0x3, 0x5396840, 0xc000160900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc00451d980, 0x4c22012, 0x14, 0xc00461b6c0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc00451d980, 0x4c22012, 0x14, 0xc0045ec920, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00451d980) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc00451d980) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc00451d980, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:297 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test May 12 17:20:46.629: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 12 17:20:46.631: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 May 12 17:20:46.646: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-b61c5baa-a19f-4f8a-923f-59cbc80c9cc8" in namespace "security-context-test-1231" to be "Succeeded or Failed" May 12 17:20:46.648: INFO: Pod "busybox-readonly-true-b61c5baa-a19f-4f8a-923f-59cbc80c9cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234976ms May 12 17:20:48.653: INFO: Pod "busybox-readonly-true-b61c5baa-a19f-4f8a-923f-59cbc80c9cc8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006686242s May 12 17:20:50.656: INFO: Pod "busybox-readonly-true-b61c5baa-a19f-4f8a-923f-59cbc80c9cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010439022s May 12 17:20:52.660: INFO: Pod "busybox-readonly-true-b61c5baa-a19f-4f8a-923f-59cbc80c9cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014452913s May 12 17:20:54.663: INFO: Pod "busybox-readonly-true-b61c5baa-a19f-4f8a-923f-59cbc80c9cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016936606s May 12 17:20:56.667: INFO: Pod "busybox-readonly-true-b61c5baa-a19f-4f8a-923f-59cbc80c9cc8": Phase="Failed", Reason="", readiness=false. Elapsed: 10.020950981s May 12 17:20:56.667: INFO: Pod "busybox-readonly-true-b61c5baa-a19f-4f8a-923f-59cbc80c9cc8" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:56.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1231" for this suite. • [SLOW TEST:10.067 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples May 12 17:20:46.217: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 12 17:20:46.219: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 May 12 17:20:46.228: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 STEP: creating secret and pod May 12 17:20:46.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-277 create -f -' May 12 17:20:46.706: INFO: stderr: "" May 12 17:20:46.706: INFO: stdout: "secret/test-secret created\n" May 12 17:20:46.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-277 create -f -' May 12 17:20:46.957: INFO: stderr: "" May 12 17:20:46.957: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly May 12 17:20:56.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-277 logs secret-test-pod test-container' May 12 17:20:57.137: INFO: stderr: "" May 12 17:20:57.137: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:57.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-277" for this suite. • [SLOW TEST:10.949 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret","total":-1,"completed":1,"skipped":61,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:48.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:57.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8349" for this suite. 
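The private-registry cases above hinge on an image pull secret attached to the pod; a minimal sketch, with illustrative registry, credentials, and names:
# Create a docker-registry secret and reference it from the pod via imagePullSecrets.
kubectl --kubeconfig=/root/.kube/config create secret docker-registry regcred \
  --docker-server=registry.example.com --docker-username=user --docker-password=pass   # illustrative values
cat <<EOF | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo          # illustrative name
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: main
    image: registry.example.com/private/image:latest   # illustrative private image
EOF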
• [SLOW TEST:9.087 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":2,"skipped":77,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:50.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:20:58.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4313" for this suite. 
• [SLOW TEST:8.047 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":2,"skipped":58,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:54.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 May 12 17:20:55.019: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-3754" to be "Succeeded or Failed" May 12 17:20:55.021: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.419111ms May 12 17:20:57.025: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005809124s May 12 17:20:59.029: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009793711s May 12 17:21:01.032: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013456867s May 12 17:21:01.032: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:01.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3754" for this suite. 
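The runAsNonRoot cases above are driven by the runAsNonRoot and runAsUser fields of the container securityContext; a minimal sketch of the explicit non-root UID variant, with illustrative names:
cat <<EOF | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo                # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                  # illustrative image
    command: ["sh", "-c", "id -u"]
    securityContext:
      runAsNonRoot: true
      runAsUser: 1234               # with runAsNonRoot set and no non-zero UID available, the kubelet refuses to start the container
EOF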
• [SLOW TEST:6.061 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:57.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:03.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9308" for this suite. • [SLOW TEST:6.052 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":2,"skipped":216,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:56.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 May 12 17:20:56.981: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 STEP: creating the pod May 12 17:20:57.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5327 create -f -' May 12 17:20:57.349: INFO: stderr: "" May 12 17:20:57.349: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly May 12 17:21:03.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5327 logs dapi-test-pod test-container' May 12 17:21:03.515: INFO: stderr: "" May 12 17:21:03.515: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-5327\nMY_POD_IP=10.244.3.37\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" May 12 17:21:03.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5327 logs dapi-test-pod test-container' May 12 17:21:03.689: INFO: stderr: "" May 12 17:21:03.689: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-5327\nMY_POD_IP=10.244.3.37\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:03.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-5327" for this suite. 
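The Downward API example above injects the pod's own name and namespace through env fieldRefs; a minimal sketch (illustrative names) producing the MY_POD_NAME and MY_POD_NAMESPACE variables seen in the dapi-test-pod log output:
cat <<EOF | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo                   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # illustrative image
    command: ["sh", "-c", "env"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
EOF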
• [SLOW TEST:6.744 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace","total":-1,"completed":2,"skipped":370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:57.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:03.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2097" for this suite. 
• [SLOW TEST:6.042 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":3,"skipped":284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:01.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:04.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4830" for this suite. 
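------------------------------
The container-runtime spec that finishes here points a container at a registry that does not exist and then reads the container status back. A small sketch of that status check, assuming the standard core/v1 types (the helper and the stand-in status are mine, not the framework's; the kubelet reports such a container as waiting with reason ErrImagePull or ImagePullBackOff):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// waitingReason returns the waiting reason of the named container, if it is waiting.
func waitingReason(pod *corev1.Pod, container string) (string, bool) {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name == container && cs.State.Waiting != nil {
			return cs.State.Waiting.Reason, true
		}
	}
	return "", false
}

func main() {
	// Stand-in pod status shaped like what the kubelet reports for an unpullable image.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			ContainerStatuses: []corev1.ContainerStatus{{
				Name:  "image-pull-test",
				State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{Reason: "ImagePullBackOff"}},
			}},
		},
	}
	if reason, ok := waitingReason(pod, "image-pull-test"); ok {
		fmt.Println("container is waiting:", reason)
	}
}
------------------------------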
•SS ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":3,"skipped":411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:04.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 12 17:21:04.558: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:04.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-2035" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0512 17:21:04.567272 35 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 218 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001920d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0013fa750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003ef3f80, 0xc0013fa750, 0xc003ef3f80, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0013fa750, 0x49182117951331, 0xc0013fa778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0x90, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc004048b10, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0019de3c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0019de3c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000586818, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0013fb6c0, 0xc00361bef0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00361bef0, 0x0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00361bef0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00284e000, 0xc00361bef0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00284e000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00284e000, 0xc002844030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7f9d8f37cb10, 0xc002c8b380, 0x4c22012, 0x14, 0xc001fced50, 0x3, 0x3, 0x5396840, 0xc000190900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc002c8b380, 0x4c22012, 0x14, 0xc002e2ca80, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc002c8b380, 0x4c22012, 0x14, 0xc002b66bc0, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c8b380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc002c8b380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc002c8b380, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale up twice [Feature:ClusterAutoscalerScalability2] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:161 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:04.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename localssd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:36 May 12 17:21:04.701: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:04.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "localssd-2874" for this suite. 
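------------------------------
About the "Observed a panic: invalid memory address or nil pointer dereference" blocks around the autoscaler scalability specs (the one above and the identical ones further down): the trace shows the spec's [AfterEach] still running after the provider check in the [BeforeEach] skipped the spec, so WaitForReadyNodes appears to be called with a client that was never initialised (note the 0x0 arguments to waitListSchedulableNodes), and the poll condition dereferences it. A minimal Go sketch of that failure shape, assuming the same k8s.io/apimachinery packages the trace names (the nodeLister type is mine):

package main

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

type nodeLister struct{ names []string }

func main() {
	var c *nodeLister // never initialised, like the framework client after a skipped BeforeEach
	// Running this panics: the condition dereferences the nil pointer, the poller's
	// crash protection logs "Observed a panic: ... nil pointer dereference" (the
	// runtime.go frames in the trace above) and then re-panics.
	_ = wait.PollImmediate(time.Second, 5*time.Second, func() (bool, error) {
		return len(c.names) > 0, nil
	})
}
------------------------------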
S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should write and read from node local SSD [Feature:GKELocalSSD] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:40 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:37 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:58.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:04.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1253" for this suite. 
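------------------------------
The Sysctls spec that finishes here sets kernel.shm_rmid_forced through the pod-level securityContext and then checks the value from inside the container. A sketch of an equivalent pod, assuming the standard core/v1 types (image, command and the value "1" are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-shm-rmid-forced"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Pod-level securityContext: kernel.shm_rmid_forced is one of the
			// "safe" sysctls the kubelet allows without extra configuration.
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder
				Command: []string{"sh", "-c", "sysctl kernel.shm_rmid_forced"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------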
• [SLOW TEST:6.055 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":3,"skipped":345,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:04.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 12 17:21:04.942: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:04.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-5815" for this suite. [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0512 17:21:04.951326 35 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 218 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001920d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0013fa750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc004caf480, 0xc0013fa750, 0xc004caf480, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0013fa750, 
0x4918212e7a1b85, 0xc0013fa778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0x8e, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc004049cb0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0019de3c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0019de3c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000586818, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0013fb6c0, 0xc00361c2d0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00361c2d0, 0x0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00361c2d0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00284e000, 0xc00361c2d0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00284e000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00284e000, 0xc002844030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7f9d8f37cb10, 0xc002c8b380, 0x4c22012, 0x14, 0xc001fced50, 0x3, 0x3, 0x5396840, 0xc000190900, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc002c8b380, 0x4c22012, 0x14, 0xc002e2ca80, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc002c8b380, 0x4c22012, 0x14, 0xc002b66bc0, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c8b380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc002c8b380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc002c8b380, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:335 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:774 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:08.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4196" for this suite. 
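------------------------------
The Pods spec that finishes here ("should support pod readiness gates") declares two custom readiness gates and then flips them by patching conditions into the pod status, which is what the "patching pod status with condition ..." steps above are doing. A sketch of the two pieces involved, assuming the standard core/v1 types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Declared in the pod spec: the pod only becomes Ready once every listed
	// condition type is True in pod.Status.Conditions.
	spec := corev1.PodSpec{
		ReadinessGates: []corev1.PodReadinessGate{
			{ConditionType: "k8s.io/test-condition1"},
			{ConditionType: "k8s.io/test-condition2"},
		},
	}
	// Written by the test through a status patch, one condition per gate.
	cond := corev1.PodCondition{
		Type:   "k8s.io/test-condition1",
		Status: corev1.ConditionTrue,
	}
	fmt.Printf("gates: %v\npatched condition: %+v\n", spec.ReadinessGates, cond)
}
------------------------------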
• [SLOW TEST:22.071 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:774 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":2,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:09.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 12 17:21:09.109: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:09.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-6137" for this suite. [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0512 17:21:09.121035 26 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 127 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc000a42750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0023c37c0, 0xc000a42750, 0xc0023c37c0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 
0xc000a42750, 0x49182227021ca3, 0xc000a42778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0x91, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc004ca0570, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001426540, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001426540, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0007373e0, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc000a436c0, 0xc00173be00, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00173be00, 0x0, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00173be00, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001bd6000, 0xc00173be00, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001bd6000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001bd6000, 0xc001bd0030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7fd4f4043f00, 0xc000c72f00, 0x4c22012, 0x14, 0xc0045870e0, 0x3, 0x3, 0x5396840, 0xc0001e08c0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc000c72f00, 0x4c22012, 0x14, 0xc002dc7d80, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc000c72f00, 0x4c22012, 0x14, 0xc004581e00, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c72f00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc000c72f00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc000c72f00, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale up at all [Feature:ClusterAutoscalerScalability1] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:138 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:04.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 May 12 17:21:04.590: INFO: Waiting up to 5m0s for pod "busybox-user-0-d6aa64ac-292b-4f6b-af49-edfcb43faae8" in namespace "security-context-test-5625" to be "Succeeded or Failed" May 12 17:21:04.592: INFO: Pod "busybox-user-0-d6aa64ac-292b-4f6b-af49-edfcb43faae8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.849049ms May 12 17:21:06.596: INFO: Pod "busybox-user-0-d6aa64ac-292b-4f6b-af49-edfcb43faae8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005135858s May 12 17:21:08.599: INFO: Pod "busybox-user-0-d6aa64ac-292b-4f6b-af49-edfcb43faae8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008222302s May 12 17:21:10.602: INFO: Pod "busybox-user-0-d6aa64ac-292b-4f6b-af49-edfcb43faae8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.011866127s May 12 17:21:12.606: INFO: Pod "busybox-user-0-d6aa64ac-292b-4f6b-af49-edfcb43faae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.015392319s May 12 17:21:12.606: INFO: Pod "busybox-user-0-d6aa64ac-292b-4f6b-af49-edfcb43faae8" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:12.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5625" for this suite. • [SLOW TEST:8.055 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:13.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 12 17:21:13.078: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:13.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-1014" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0512 17:21:13.087678 28 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 242 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0021a6750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000dead20, 0xc0021a6750, 0xc000dead20, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0021a6750, 0x491823136e9a7f, 0xc0021a6778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0x91, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0009e8600, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0012b3920, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0012b3920, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000010350, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0021a76c0, 0xc00293a0f0, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00293a0f0, 0x0, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00293a0f0, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00248cf00, 0xc00293a0f0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00248cf00, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00248cf00, 0xc00304aba8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7f9f87bedaa0, 0xc001f59080, 0x4c22012, 0x14, 0xc002bb4e40, 0x3, 0x3, 0x5396840, 0xc0001e08c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc001f59080, 0x4c22012, 0x14, 0xc002537300, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc001f59080, 0x4c22012, 0x14, 0xc00098af40, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001f59080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001f59080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001f59080, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:238 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:13.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 12 17:21:13.150: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:13.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-5183" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0512 17:21:13.159398 28 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 242 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0021a6750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0045b1140, 0xc0021a6750, 0xc0045b1140, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0021a6750, 0x49182317b77d64, 0xc0021a6778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0x93, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc00425ab40, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0012b3920, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0012b3920, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000010350, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0021a76c0, 0xc00293a000, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00293a000, 0x0, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00293a000, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00248cf00, 0xc00293a000, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00248cf00, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00248cf00, 0xc00304aba8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7f9f87bedaa0, 0xc001f59080, 0x4c22012, 0x14, 0xc002bb4e40, 0x3, 0x3, 0x5396840, 0xc0001e08c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc001f59080, 0x4c22012, 0x14, 0xc002537300, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc001f59080, 0x4c22012, 0x14, 0xc00098af40, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001f59080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001f59080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001f59080, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale down empty nodes [Feature:ClusterAutoscalerScalability3] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:210 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 12 17:21:13.535: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:03.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 STEP: Creating a pod with a privileged container STEP: Executing in the privileged container May 12 17:21:13.594: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-8424 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:21:13.594: INFO: >>> kubeConfig: /root/.kube/config May 12 17:21:13.907: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-8424 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:21:13.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container May 12 17:21:14.258: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-8424 PodName:privileged-pod 
ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:21:14.258: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:14.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-8424" for this suite. • [SLOW TEST:10.844 seconds] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 ------------------------------ {"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":3,"skipped":230,"failed":0} May 12 17:21:14.399: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:05.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 May 12 17:21:05.063: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-3574" to be "Succeeded or Failed" May 12 17:21:05.066: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.983536ms May 12 17:21:07.068: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005453415s May 12 17:21:09.071: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008539179s May 12 17:21:11.074: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011272402s May 12 17:21:13.077: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014472542s May 12 17:21:15.083: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02000992s May 12 17:21:15.083: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:15.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3574" for this suite. 
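------------------------------
The PrivilegedPod spec above execs `ip link add dummy1 type dummy` in two containers of the same pod and expects it to succeed only in the privileged one. The difference between the two containers comes down to a single securityContext field; a sketch assuming the standard core/v1 types (container names mirror the log, image is a placeholder):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	privileged := true
	notPrivileged := false
	containers := []corev1.Container{
		{
			Name:            "privileged-container",
			Image:           "busybox", // placeholder
			SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
		},
		{
			Name:            "not-privileged-container",
			Image:           "busybox", // placeholder
			SecurityContext: &corev1.SecurityContext{Privileged: &notPrivileged},
		},
	}
	out, _ := json.MarshalIndent(containers, "", "  ")
	fmt.Println(string(out))
}
------------------------------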
• [SLOW TEST:10.064 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":3,"skipped":1012,"failed":0} May 12 17:21:15.100: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:09.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:15.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1613" for this suite. • [SLOW TEST:6.077 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":3,"skipped":433,"failed":0} May 12 17:21:15.210: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:04.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 STEP: Creating pod liveness-c52090a8-7efa-4433-82e2-1f354d71871c in namespace container-probe-7868 May 12 17:21:08.441: INFO: Started pod liveness-c52090a8-7efa-4433-82e2-1f354d71871c in namespace container-probe-7868 STEP: checking 
the pod's current state and verifying that restartCount is present May 12 17:21:08.443: INFO: Initial restart count of pod liveness-c52090a8-7efa-4433-82e2-1f354d71871c is 0 May 12 17:21:26.476: INFO: Restart count of pod container-probe-7868/liveness-c52090a8-7efa-4433-82e2-1f354d71871c is now 1 (18.033041096s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:21:26.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7868" for this suite. • [SLOW TEST:22.089 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":4,"skipped":524,"failed":0} May 12 17:21:26.492: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:21:05.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 May 12 17:21:05.045: INFO: Found ClusterRoles; assuming RBAC is enabled. 
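The local-redirect probe spec above is driven by an ordinary HTTP liveness probe: the kubelet follows a redirect that stays on the same host, ends up on a path that starts failing, and restarts the container (the restart count went from 0 to 1 about 18s after start). Below is a sketch of such a probe using the v1.19-era corev1 types; the path, port and timings are illustrative rather than the suite's fixture values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// httpLivenessProbe returns a probe of the kind exercised above. With the
// v1.19 API the handler is the embedded corev1.Handler (renamed ProbeHandler
// in later releases).
func httpLivenessProbe() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",           // illustrative; the suite's fixture serves a same-host redirect here
				Port: intstr.FromInt(8080), // illustrative port
			},
		},
		InitialDelaySeconds: 5,
		PeriodSeconds:       3,
		FailureThreshold:    1,
	}
}

func main() {
	fmt.Printf("%+v\n", httpLivenessProbe())
}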
[It] liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 May 12 17:21:05.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9681 create -f -' May 12 17:21:05.410: INFO: stderr: "" May 12 17:21:05.410: INFO: stdout: "pod/liveness-exec created\n" May 12 17:21:05.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9681 create -f -' May 12 17:21:05.696: INFO: stderr: "" May 12 17:21:05.696: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts May 12 17:21:13.703: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:13.703: INFO: Pod: liveness-http, restart count:0 May 12 17:21:15.706: INFO: Pod: liveness-http, restart count:0 May 12 17:21:15.706: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:17.710: INFO: Pod: liveness-http, restart count:0 May 12 17:21:17.710: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:19.713: INFO: Pod: liveness-http, restart count:0 May 12 17:21:19.713: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:21.716: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:21.716: INFO: Pod: liveness-http, restart count:0 May 12 17:21:23.721: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:23.721: INFO: Pod: liveness-http, restart count:0 May 12 17:21:25.724: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:25.724: INFO: Pod: liveness-http, restart count:0 May 12 17:21:27.728: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:27.728: INFO: Pod: liveness-http, restart count:0 May 12 17:21:29.731: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:29.731: INFO: Pod: liveness-http, restart count:0 May 12 17:21:31.733: INFO: Pod: liveness-http, restart count:0 May 12 17:21:31.734: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:33.737: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:33.737: INFO: Pod: liveness-http, restart count:0 May 12 17:21:35.742: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:35.743: INFO: Pod: liveness-http, restart count:0 May 12 17:21:37.746: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:37.746: INFO: Pod: liveness-http, restart count:0 May 12 17:21:39.750: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:39.750: INFO: Pod: liveness-http, restart count:0 May 12 17:21:41.754: INFO: Pod: liveness-http, restart count:0 May 12 17:21:41.754: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:43.758: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:43.759: INFO: Pod: liveness-http, restart count:0 May 12 17:21:45.761: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:45.761: INFO: Pod: liveness-http, restart count:0 May 12 17:21:47.764: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:47.765: INFO: Pod: liveness-http, restart count:0 May 12 17:21:49.767: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:49.767: INFO: Pod: liveness-http, restart count:0 May 12 17:21:51.773: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:51.773: INFO: Pod: liveness-http, restart count:0 May 12 17:21:53.777: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:53.777: INFO: Pod: liveness-http, restart count:0 May 12 17:21:55.780: INFO: Pod: liveness-http, restart count:0 May 12 17:21:55.780: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:57.784: INFO: Pod: liveness-http, restart count:1 May 12 17:21:57.784: INFO: Saw liveness-http restart, succeeded... 
May 12 17:21:57.784: INFO: Pod: liveness-exec, restart count:0 May 12 17:21:59.787: INFO: Pod: liveness-exec, restart count:0 May 12 17:22:01.793: INFO: Pod: liveness-exec, restart count:0 May 12 17:22:03.799: INFO: Pod: liveness-exec, restart count:0 May 12 17:22:05.803: INFO: Pod: liveness-exec, restart count:0 May 12 17:22:07.807: INFO: Pod: liveness-exec, restart count:0 May 12 17:22:09.810: INFO: Pod: liveness-exec, restart count:0 May 12 17:22:11.817: INFO: Pod: liveness-exec, restart count:0 May 12 17:22:13.820: INFO: Pod: liveness-exec, restart count:0 May 12 17:22:15.822: INFO: Pod: liveness-exec, restart count:0 May 12 17:22:17.827: INFO: Pod: liveness-exec, restart count:1 May 12 17:22:17.827: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:22:17.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-9681" for this suite. • [SLOW TEST:72.815 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:55.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 STEP: Creating pod liveness-ffcf8c01-c1ba-417e-82e4-9d000a4914da in namespace container-probe-5658 May 12 17:21:01.963: INFO: Started pod liveness-ffcf8c01-c1ba-417e-82e4-9d000a4914da in namespace container-probe-5658 STEP: checking the pod's current state and verifying that restartCount is present May 12 17:21:01.966: INFO: Initial restart count of pod liveness-ffcf8c01-c1ba-417e-82e4-9d000a4914da is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:25:02.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5658" for this suite. 
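Two probe behaviours are visible in the specs above: the example liveness pods, created with "kubectl ... create -f -", are each expected to restart exactly once, while the non-local redirect probe pod is watched for roughly four minutes and must not restart, since the kubelet does not follow a redirect to a different host and the probe therefore keeps passing. The restart-count polling that produces the "Pod: ..., restart count:N" lines above can be sketched with client-go as follows; the helper name, namespace and timeout are illustrative, not the suite's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRestart polls a pod every 2s (as the log above does) until any of its
// containers reports RestartCount > 0, or the timeout expires.
func waitForRestart(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, st := range pod.Status.ContainerStatuses {
			fmt.Printf("Pod: %s, restart count:%d\n", name, st.RestartCount)
			if st.RestartCount > 0 {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path illustrative
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Illustrative pod name and namespace; the suite watches liveness-exec and
	// liveness-http in examples-9681.
	if err := waitForRestart(cs, "default", "liveness-exec", 5*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("saw a restart, succeeded")
}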
• [SLOW TEST:246.550 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":3,"skipped":625,"failed":0} May 12 17:25:02.476: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:55.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 STEP: wait until node is ready May 12 17:20:55.475: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration May 12 17:20:56.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:20:56.493: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:20:45 +0000 UTC"}, + LastHeartbeatTime: 
v1.Time{Time: s"2021-05-12 17:20:55 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:20:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:20:55 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:20:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:20:55 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:20:57.486: INFO: node status heartbeat is unchanged for 999.239791ms, waiting for 1m20s May 12 17:20:58.487: INFO: node status heartbeat is unchanged for 2.000595813s, waiting for 1m20s May 12 17:20:59.485: INFO: node status heartbeat is unchanged for 2.99897966s, waiting for 1m20s May 12 17:21:00.487: INFO: node status heartbeat is unchanged for 4.00015671s, waiting for 1m20s May 12 17:21:01.488: INFO: node status heartbeat is unchanged for 5.001026238s, waiting for 1m20s May 12 17:21:02.488: INFO: node status heartbeat is unchanged for 6.001341973s, waiting for 1m20s May 12 17:21:03.486: INFO: node status heartbeat is unchanged for 6.999973176s, waiting for 1m20s May 12 17:21:04.487: INFO: node status heartbeat is unchanged for 8.000584204s, waiting for 1m20s May 12 17:21:05.487: INFO: node status heartbeat is unchanged for 9.000179468s, waiting for 1m20s May 12 17:21:06.486: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:21:06.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:20:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:20:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:20:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 12 17:21:07.489: INFO: node status heartbeat is unchanged for 1.003375758s, waiting for 1m20s May 12 17:21:08.486: INFO: node status heartbeat is unchanged for 1.999537683s, waiting for 1m20s May 12 17:21:09.486: INFO: node status heartbeat is unchanged for 2.999862514s, waiting for 1m20s May 12 17:21:10.486: INFO: node status heartbeat is unchanged for 4.000190412s, waiting for 1m20s May 12 17:21:11.489: INFO: node status heartbeat is unchanged for 5.002607019s, waiting for 1m20s May 12 17:21:12.486: INFO: node status heartbeat is unchanged for 6.000287483s, waiting for 1m20s May 12 17:21:13.487: INFO: node status heartbeat is unchanged for 7.000851386s, waiting for 1m20s May 12 17:21:14.486: INFO: node status heartbeat is unchanged for 7.999777638s, waiting for 1m20s May 12 17:21:15.487: INFO: node status heartbeat is unchanged for 9.001331719s, waiting for 1m20s May 12 17:21:16.486: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:21:16.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:05 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:15 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:05 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:15 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 
17:21:05 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:15 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, NodeInfo: v1.NodeSystemInfo{MachineID: "eebaf4858fee4a739009f2f7f2717953", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "15bd8119-98f5-4e13-a2fa-897c2224dd6e", KernelVersion: "3.10.0-1160.25.1.el7.x86_64", OSImage: "CentOS Linux 7 (Core)", ContainerRuntimeVersion: "docker://19.3.14", KubeletVersion: "v1.19.8", KubeProxyVersion: "v1.19.8", OperatingSystem: "linux", Architecture: "amd64"}, Images: []v1.ContainerImage{ ... // 13 identical elements {Names: []string{"k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b", "k8s.gcr.io/kube-scheduler:v1.19.8"}, SizeBytes: 46510430}, {Names: []string{"localhost:30500/sriov-device-plugin@sha256:155361826d160d9f566aede3ea35b34f1e0c6422720285b1c22467c2b21a90aa", "localhost:30500/sriov-device-plugin:v3.3.1"}, SizeBytes: 44392919}, + { + Names: []string{ + "gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213", + "gcr.io/kubernetes-e2e-test-images/nonroot:1.0", + }, + SizeBytes: 42321438, + }, {Names: []string{"quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee", "quay.io/prometheus/node-exporter:v0.18.1"}, SizeBytes: 22933477}, {Names: []string{"localhost:30500/tas-controller@sha256:60f4b5001bb5e7280fddf9143d3ed9bcde4e8016eef54522b5aea6bac9d9774b", "localhost:30500/tas-controller:0.1"}, SizeBytes: 22922439}, ... 
// 2 identical elements {Names: []string{"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", "nginx:1.14-alpine"}, SizeBytes: 16032814}, {Names: []string{"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e", "gcr.io/google-samples/hello-go-gke:1.0"}, SizeBytes: 11443478}, + { + Names: []string{ + "gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0", + "gcr.io/authenticated-image-pulling/alpine:3.7", + }, + SizeBytes: 4206620, + }, + { + Names: []string{ + "busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", + "busybox:1.29", + }, + SizeBytes: 1154361, + }, {Names: []string{"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", "busybox:1.28"}, SizeBytes: 1146369}, {Names: []string{"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa", "k8s.gcr.io/pause:3.3"}, SizeBytes: 682696}, }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } May 12 17:21:17.488: INFO: node status heartbeat is unchanged for 1.001847281s, waiting for 1m20s May 12 17:21:18.487: INFO: node status heartbeat is unchanged for 2.001299768s, waiting for 1m20s May 12 17:21:19.488: INFO: node status heartbeat is unchanged for 3.001386592s, waiting for 1m20s May 12 17:21:20.487: INFO: node status heartbeat is unchanged for 4.000449757s, waiting for 1m20s May 12 17:21:21.487: INFO: node status heartbeat is unchanged for 5.000519267s, waiting for 1m20s May 12 17:21:22.486: INFO: node status heartbeat is unchanged for 5.999983661s, waiting for 1m20s May 12 17:21:23.488: INFO: node status heartbeat is unchanged for 7.001436146s, waiting for 1m20s May 12 17:21:24.487: INFO: node status heartbeat is unchanged for 8.000630247s, waiting for 1m20s May 12 17:21:25.487: INFO: node status heartbeat is unchanged for 9.000712436s, waiting for 1m20s May 12 17:21:26.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:21:26.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: 
"DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:25 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:25 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:25 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:21:27.488: INFO: node status heartbeat is unchanged for 1.001068993s, waiting for 1m20s May 12 17:21:28.489: INFO: node status heartbeat is unchanged for 2.001842802s, waiting for 1m20s May 12 17:21:29.486: INFO: node status heartbeat is unchanged for 2.999462674s, waiting for 1m20s May 12 17:21:30.489: INFO: node status heartbeat is unchanged for 4.00244117s, waiting for 1m20s May 12 17:21:31.488: INFO: node status heartbeat is unchanged for 5.001016506s, waiting for 1m20s May 12 17:21:32.488: INFO: node status heartbeat is unchanged for 6.000705733s, waiting for 1m20s May 12 17:21:33.488: INFO: node status heartbeat is unchanged for 7.001290203s, waiting for 1m20s May 12 17:21:34.486: INFO: node status heartbeat is unchanged for 7.999500951s, waiting for 1m20s May 12 17:21:35.489: INFO: node status heartbeat is unchanged for 9.002206486s, waiting for 1m20s May 12 17:21:36.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:21:36.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: 
resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:25 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:35 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:25 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:35 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:25 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:35 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 12 17:21:37.487: INFO: node status heartbeat is unchanged for 1.000128755s, waiting for 1m20s May 12 17:21:38.489: INFO: node status heartbeat is unchanged for 2.002311877s, waiting for 1m20s May 12 17:21:39.487: INFO: node status heartbeat is unchanged for 3.000830744s, waiting for 1m20s May 12 17:21:40.488: INFO: node status heartbeat is unchanged for 4.001652674s, waiting for 1m20s May 12 17:21:41.487: INFO: node status heartbeat is unchanged for 5.000700972s, waiting for 1m20s May 12 17:21:42.488: INFO: node status heartbeat is unchanged for 6.00102682s, waiting for 1m20s May 12 17:21:43.487: INFO: node status heartbeat is unchanged for 6.999954841s, waiting for 1m20s May 12 17:21:44.487: INFO: node status heartbeat is unchanged for 8.000713662s, waiting for 1m20s May 12 17:21:45.487: INFO: node status heartbeat is unchanged for 9.000328353s, waiting for 1m20s May 12 17:21:46.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:21:46.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:45 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:45 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 
17:21:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:45 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:21:47.488: INFO: node status heartbeat is unchanged for 1.000771593s, waiting for 1m20s May 12 17:21:48.487: INFO: node status heartbeat is unchanged for 1.999924062s, waiting for 1m20s May 12 17:21:49.487: INFO: node status heartbeat is unchanged for 2.999623585s, waiting for 1m20s May 12 17:21:50.487: INFO: node status heartbeat is unchanged for 3.999960866s, waiting for 1m20s May 12 17:21:51.486: INFO: node status heartbeat is unchanged for 4.99895458s, waiting for 1m20s May 12 17:21:52.487: INFO: node status heartbeat is unchanged for 5.999494238s, waiting for 1m20s May 12 17:21:53.487: INFO: node status heartbeat is unchanged for 6.99954348s, waiting for 1m20s May 12 17:21:54.486: INFO: node status heartbeat is unchanged for 7.999243069s, waiting for 1m20s May 12 17:21:55.489: INFO: node status heartbeat is unchanged for 9.001647161s, waiting for 1m20s May 12 17:21:56.486: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:21:56.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:45 +0000 UTC"}, + LastHeartbeatTime: 
v1.Time{Time: s"2021-05-12 17:21:55 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:55 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:55 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:21:57.488: INFO: node status heartbeat is unchanged for 1.001623596s, waiting for 1m20s May 12 17:21:58.488: INFO: node status heartbeat is unchanged for 2.00142569s, waiting for 1m20s May 12 17:21:59.486: INFO: node status heartbeat is unchanged for 2.999587852s, waiting for 1m20s May 12 17:22:00.487: INFO: node status heartbeat is unchanged for 4.000530792s, waiting for 1m20s May 12 17:22:01.486: INFO: node status heartbeat is unchanged for 5.000172962s, waiting for 1m20s May 12 17:22:02.486: INFO: node status heartbeat is unchanged for 6.00016132s, waiting for 1m20s May 12 17:22:03.489: INFO: node status heartbeat is unchanged for 7.002427392s, waiting for 1m20s May 12 17:22:04.486: INFO: node status heartbeat is unchanged for 8.000005471s, waiting for 1m20s May 12 17:22:05.487: INFO: node status heartbeat is unchanged for 9.000471873s, waiting for 1m20s May 12 17:22:06.487: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s May 12 17:22:06.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:21:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 12 17:22:07.487: INFO: node status heartbeat is unchanged for 999.741049ms, waiting for 1m20s May 12 17:22:08.488: INFO: node status heartbeat is unchanged for 2.000516274s, waiting for 1m20s May 12 17:22:09.488: INFO: node status heartbeat is unchanged for 3.000936647s, waiting for 1m20s May 12 17:22:10.486: INFO: node status heartbeat is unchanged for 3.999206505s, waiting for 1m20s May 12 17:22:11.487: INFO: node status heartbeat is unchanged for 5.00036663s, waiting for 1m20s May 12 17:22:12.488: INFO: node status heartbeat is unchanged for 6.000712073s, waiting for 1m20s May 12 17:22:13.487: INFO: node status heartbeat is unchanged for 7.000338611s, waiting for 1m20s May 12 17:22:14.488: INFO: node status heartbeat is unchanged for 8.000749519s, waiting for 1m20s May 12 17:22:15.486: INFO: node status heartbeat is unchanged for 8.999478662s, waiting for 1m20s May 12 17:22:16.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:22:16.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 
17:22:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:22:17.487: INFO: node status heartbeat is unchanged for 1.00089959s, waiting for 1m20s May 12 17:22:18.488: INFO: node status heartbeat is unchanged for 2.001347316s, waiting for 1m20s May 12 17:22:19.487: INFO: node status heartbeat is unchanged for 3.001034196s, waiting for 1m20s May 12 17:22:20.488: INFO: node status heartbeat is unchanged for 4.001064783s, waiting for 1m20s May 12 17:22:21.487: INFO: node status heartbeat is unchanged for 5.000519168s, waiting for 1m20s May 12 17:22:22.487: INFO: node status heartbeat is unchanged for 6.000836541s, waiting for 1m20s May 12 17:22:23.487: INFO: node status heartbeat is unchanged for 7.000543334s, waiting for 1m20s May 12 17:22:24.487: INFO: node status heartbeat is unchanged for 8.000100742s, waiting for 1m20s May 12 17:22:25.487: INFO: node status heartbeat is unchanged for 9.000210904s, waiting for 1m20s May 12 17:22:26.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:22:26.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:16 +0000 UTC"}, + LastHeartbeatTime: 
v1.Time{Time: s"2021-05-12 17:22:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:22:27.488: INFO: node status heartbeat is unchanged for 1.001448022s, waiting for 1m20s May 12 17:22:28.487: INFO: node status heartbeat is unchanged for 1.999884944s, waiting for 1m20s May 12 17:22:29.486: INFO: node status heartbeat is unchanged for 2.999699845s, waiting for 1m20s May 12 17:22:30.488: INFO: node status heartbeat is unchanged for 4.001037844s, waiting for 1m20s May 12 17:22:31.487: INFO: node status heartbeat is unchanged for 4.999862322s, waiting for 1m20s May 12 17:22:32.487: INFO: node status heartbeat is unchanged for 6.000263079s, waiting for 1m20s May 12 17:22:33.488: INFO: node status heartbeat is unchanged for 7.001476913s, waiting for 1m20s May 12 17:22:34.487: INFO: node status heartbeat is unchanged for 8.000515163s, waiting for 1m20s May 12 17:22:35.487: INFO: node status heartbeat is unchanged for 8.99990196s, waiting for 1m20s May 12 17:22:36.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:22:36.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 12 17:22:37.486: INFO: node status heartbeat is unchanged for 999.197398ms, waiting for 1m20s May 12 17:22:38.487: INFO: node status heartbeat is unchanged for 1.999594346s, waiting for 1m20s May 12 17:22:39.488: INFO: node status heartbeat is unchanged for 3.000729426s, waiting for 1m20s May 12 17:22:40.486: INFO: node status heartbeat is unchanged for 3.999125154s, waiting for 1m20s May 12 17:22:41.487: INFO: node status heartbeat is unchanged for 4.99954851s, waiting for 1m20s May 12 17:22:42.487: INFO: node status heartbeat is unchanged for 5.999636767s, waiting for 1m20s May 12 17:22:43.487: INFO: node status heartbeat is unchanged for 6.999740794s, waiting for 1m20s May 12 17:22:44.487: INFO: node status heartbeat is unchanged for 7.999814846s, waiting for 1m20s May 12 17:22:45.488: INFO: node status heartbeat is unchanged for 9.000357959s, waiting for 1m20s May 12 17:22:46.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:22:46.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 
17:22:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:22:47.487: INFO: node status heartbeat is unchanged for 1.000603004s, waiting for 1m20s May 12 17:22:48.487: INFO: node status heartbeat is unchanged for 2.000841327s, waiting for 1m20s May 12 17:22:49.487: INFO: node status heartbeat is unchanged for 3.000170197s, waiting for 1m20s May 12 17:22:50.487: INFO: node status heartbeat is unchanged for 4.000624961s, waiting for 1m20s May 12 17:22:51.486: INFO: node status heartbeat is unchanged for 4.999621997s, waiting for 1m20s May 12 17:22:52.488: INFO: node status heartbeat is unchanged for 6.001422354s, waiting for 1m20s May 12 17:22:53.487: INFO: node status heartbeat is unchanged for 7.000676501s, waiting for 1m20s May 12 17:22:54.487: INFO: node status heartbeat is unchanged for 8.000762052s, waiting for 1m20s May 12 17:22:55.487: INFO: node status heartbeat is unchanged for 9.000536502s, waiting for 1m20s May 12 17:22:56.486: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:22:56.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:46 +0000 UTC"}, + LastHeartbeatTime: 
v1.Time{Time: s"2021-05-12 17:22:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:22:57.488: INFO: node status heartbeat is unchanged for 1.002072023s, waiting for 1m20s May 12 17:22:58.487: INFO: node status heartbeat is unchanged for 2.001423884s, waiting for 1m20s May 12 17:22:59.486: INFO: node status heartbeat is unchanged for 3.000127786s, waiting for 1m20s May 12 17:23:00.486: INFO: node status heartbeat is unchanged for 4.000558983s, waiting for 1m20s May 12 17:23:01.487: INFO: node status heartbeat is unchanged for 5.001126393s, waiting for 1m20s May 12 17:23:02.486: INFO: node status heartbeat is unchanged for 6.000297814s, waiting for 1m20s May 12 17:23:03.487: INFO: node status heartbeat is unchanged for 7.000742119s, waiting for 1m20s May 12 17:23:04.487: INFO: node status heartbeat is unchanged for 8.000882685s, waiting for 1m20s May 12 17:23:05.487: INFO: node status heartbeat is unchanged for 9.000627482s, waiting for 1m20s May 12 17:23:06.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:23:06.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:22:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 12 17:23:07.488: INFO: node status heartbeat is unchanged for 1.000702227s, waiting for 1m20s May 12 17:23:08.487: INFO: node status heartbeat is unchanged for 2.000020577s, waiting for 1m20s May 12 17:23:09.487: INFO: node status heartbeat is unchanged for 2.999518398s, waiting for 1m20s May 12 17:23:10.487: INFO: node status heartbeat is unchanged for 3.999951211s, waiting for 1m20s May 12 17:23:11.486: INFO: node status heartbeat is unchanged for 4.999093192s, waiting for 1m20s May 12 17:23:12.487: INFO: node status heartbeat is unchanged for 5.999542127s, waiting for 1m20s May 12 17:23:13.487: INFO: node status heartbeat is unchanged for 6.999820178s, waiting for 1m20s May 12 17:23:14.487: INFO: node status heartbeat is unchanged for 8.000157006s, waiting for 1m20s May 12 17:23:15.486: INFO: node status heartbeat is unchanged for 8.999288264s, waiting for 1m20s May 12 17:23:16.488: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:23:16.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 
17:23:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:23:17.488: INFO: node status heartbeat is unchanged for 1.000375272s, waiting for 1m20s May 12 17:23:18.487: INFO: node status heartbeat is unchanged for 1.999316686s, waiting for 1m20s May 12 17:23:19.486: INFO: node status heartbeat is unchanged for 2.998794439s, waiting for 1m20s May 12 17:23:20.487: INFO: node status heartbeat is unchanged for 3.999239167s, waiting for 1m20s May 12 17:23:21.486: INFO: node status heartbeat is unchanged for 4.998374215s, waiting for 1m20s May 12 17:23:22.487: INFO: node status heartbeat is unchanged for 5.999158382s, waiting for 1m20s May 12 17:23:23.487: INFO: node status heartbeat is unchanged for 6.999230337s, waiting for 1m20s May 12 17:23:24.486: INFO: node status heartbeat is unchanged for 7.998573126s, waiting for 1m20s May 12 17:23:25.487: INFO: node status heartbeat is unchanged for 8.999115377s, waiting for 1m20s May 12 17:23:26.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:23:26.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:16 +0000 UTC"}, + LastHeartbeatTime: 
v1.Time{Time: s"2021-05-12 17:23:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:23:27.486: INFO: node status heartbeat is unchanged for 999.214544ms, waiting for 1m20s May 12 17:23:28.487: INFO: node status heartbeat is unchanged for 1.999741528s, waiting for 1m20s May 12 17:23:29.486: INFO: node status heartbeat is unchanged for 2.999329526s, waiting for 1m20s May 12 17:23:30.488: INFO: node status heartbeat is unchanged for 4.00048196s, waiting for 1m20s May 12 17:23:31.487: INFO: node status heartbeat is unchanged for 4.999776913s, waiting for 1m20s May 12 17:23:32.487: INFO: node status heartbeat is unchanged for 5.999699977s, waiting for 1m20s May 12 17:23:33.487: INFO: node status heartbeat is unchanged for 6.999731237s, waiting for 1m20s May 12 17:23:34.488: INFO: node status heartbeat is unchanged for 8.000509836s, waiting for 1m20s May 12 17:23:35.487: INFO: node status heartbeat is unchanged for 8.999620655s, waiting for 1m20s May 12 17:23:36.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:23:36.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 12 17:23:37.487: INFO: node status heartbeat is unchanged for 999.995347ms, waiting for 1m20s May 12 17:23:38.487: INFO: node status heartbeat is unchanged for 1.999821323s, waiting for 1m20s May 12 17:23:39.487: INFO: node status heartbeat is unchanged for 3.000386688s, waiting for 1m20s May 12 17:23:40.486: INFO: node status heartbeat is unchanged for 3.999180869s, waiting for 1m20s May 12 17:23:41.487: INFO: node status heartbeat is unchanged for 5.000487778s, waiting for 1m20s May 12 17:23:42.486: INFO: node status heartbeat is unchanged for 5.999668808s, waiting for 1m20s May 12 17:23:43.486: INFO: node status heartbeat is unchanged for 6.999737642s, waiting for 1m20s May 12 17:23:44.486: INFO: node status heartbeat is unchanged for 7.999524702s, waiting for 1m20s May 12 17:23:45.486: INFO: node status heartbeat is unchanged for 8.999680454s, waiting for 1m20s May 12 17:23:46.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:23:46.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 
17:23:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:23:47.489: INFO: node status heartbeat is unchanged for 1.001995211s, waiting for 1m20s May 12 17:23:48.487: INFO: node status heartbeat is unchanged for 1.999467759s, waiting for 1m20s May 12 17:23:49.488: INFO: node status heartbeat is unchanged for 3.000449605s, waiting for 1m20s May 12 17:23:50.490: INFO: node status heartbeat is unchanged for 4.002650164s, waiting for 1m20s May 12 17:23:51.487: INFO: node status heartbeat is unchanged for 4.99946572s, waiting for 1m20s May 12 17:23:52.489: INFO: node status heartbeat is unchanged for 6.001633535s, waiting for 1m20s May 12 17:23:53.486: INFO: node status heartbeat is unchanged for 6.99897649s, waiting for 1m20s May 12 17:23:54.486: INFO: node status heartbeat is unchanged for 7.998707508s, waiting for 1m20s May 12 17:23:55.487: INFO: node status heartbeat is unchanged for 8.999256796s, waiting for 1m20s May 12 17:23:56.486: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:23:56.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:46 +0000 UTC"}, + LastHeartbeatTime: 
v1.Time{Time: s"2021-05-12 17:23:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:23:57.487: INFO: node status heartbeat is unchanged for 1.000574169s, waiting for 1m20s May 12 17:23:58.487: INFO: node status heartbeat is unchanged for 2.000511593s, waiting for 1m20s May 12 17:23:59.487: INFO: node status heartbeat is unchanged for 3.00041116s, waiting for 1m20s May 12 17:24:00.486: INFO: node status heartbeat is unchanged for 4.000253752s, waiting for 1m20s May 12 17:24:01.487: INFO: node status heartbeat is unchanged for 5.000453736s, waiting for 1m20s May 12 17:24:02.487: INFO: node status heartbeat is unchanged for 6.000496914s, waiting for 1m20s May 12 17:24:03.487: INFO: node status heartbeat is unchanged for 7.001044106s, waiting for 1m20s May 12 17:24:04.487: INFO: node status heartbeat is unchanged for 8.001067893s, waiting for 1m20s May 12 17:24:05.488: INFO: node status heartbeat is unchanged for 9.00160775s, waiting for 1m20s May 12 17:24:06.486: INFO: node status heartbeat is unchanged for 9.999652547s, waiting for 1m20s May 12 17:24:07.488: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:24:07.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 
21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:23:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 12 17:24:08.487: INFO: node status heartbeat is unchanged for 999.291856ms, waiting for 1m20s May 12 17:24:09.486: INFO: node status heartbeat is unchanged for 1.998258117s, waiting for 1m20s May 12 17:24:10.487: INFO: node status heartbeat is unchanged for 2.999654499s, waiting for 1m20s May 12 17:24:11.487: INFO: node status heartbeat is unchanged for 3.999423914s, waiting for 1m20s May 12 17:24:12.487: INFO: node status heartbeat is unchanged for 4.999609481s, waiting for 1m20s May 12 17:24:13.488: INFO: node status heartbeat is unchanged for 5.999942838s, waiting for 1m20s May 12 17:24:14.486: INFO: node status heartbeat is unchanged for 6.998646596s, waiting for 1m20s May 12 17:24:15.486: INFO: node status heartbeat is unchanged for 7.99865288s, waiting for 1m20s May 12 17:24:16.487: INFO: node status heartbeat is unchanged for 8.999149213s, waiting for 1m20s May 12 17:24:17.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:24:17.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 
17:24:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:24:18.486: INFO: node status heartbeat is unchanged for 999.338673ms, waiting for 1m20s May 12 17:24:19.487: INFO: node status heartbeat is unchanged for 1.999572312s, waiting for 1m20s May 12 17:24:20.486: INFO: node status heartbeat is unchanged for 2.998897043s, waiting for 1m20s May 12 17:24:21.487: INFO: node status heartbeat is unchanged for 4.000051126s, waiting for 1m20s May 12 17:24:22.488: INFO: node status heartbeat is unchanged for 5.00114684s, waiting for 1m20s May 12 17:24:23.487: INFO: node status heartbeat is unchanged for 6.000302387s, waiting for 1m20s May 12 17:24:24.486: INFO: node status heartbeat is unchanged for 6.99882132s, waiting for 1m20s May 12 17:24:25.486: INFO: node status heartbeat is unchanged for 7.999046338s, waiting for 1m20s May 12 17:24:26.487: INFO: node status heartbeat is unchanged for 8.999977868s, waiting for 1m20s May 12 17:24:27.489: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:24:27.491: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:16 +0000 UTC"}, + LastHeartbeatTime: 
v1.Time{Time: s"2021-05-12 17:24:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:24:28.486: INFO: node status heartbeat is unchanged for 997.459629ms, waiting for 1m20s May 12 17:24:29.487: INFO: node status heartbeat is unchanged for 1.998434813s, waiting for 1m20s May 12 17:24:30.487: INFO: node status heartbeat is unchanged for 2.998238774s, waiting for 1m20s May 12 17:24:31.487: INFO: node status heartbeat is unchanged for 3.998397992s, waiting for 1m20s May 12 17:24:32.486: INFO: node status heartbeat is unchanged for 4.997924333s, waiting for 1m20s May 12 17:24:33.487: INFO: node status heartbeat is unchanged for 5.9982518s, waiting for 1m20s May 12 17:24:34.487: INFO: node status heartbeat is unchanged for 6.998128864s, waiting for 1m20s May 12 17:24:35.488: INFO: node status heartbeat is unchanged for 7.999937456s, waiting for 1m20s May 12 17:24:36.487: INFO: node status heartbeat is unchanged for 8.998293307s, waiting for 1m20s May 12 17:24:37.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:24:37.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 12 17:24:38.486: INFO: node status heartbeat is unchanged for 999.600095ms, waiting for 1m20s May 12 17:24:39.487: INFO: node status heartbeat is unchanged for 2.00073128s, waiting for 1m20s May 12 17:24:40.487: INFO: node status heartbeat is unchanged for 3.000564345s, waiting for 1m20s May 12 17:24:41.487: INFO: node status heartbeat is unchanged for 3.999985017s, waiting for 1m20s May 12 17:24:42.487: INFO: node status heartbeat is unchanged for 5.000675488s, waiting for 1m20s May 12 17:24:43.487: INFO: node status heartbeat is unchanged for 6.000785756s, waiting for 1m20s May 12 17:24:44.486: INFO: node status heartbeat is unchanged for 6.998959864s, waiting for 1m20s May 12 17:24:45.488: INFO: node status heartbeat is unchanged for 8.001224085s, waiting for 1m20s May 12 17:24:46.486: INFO: node status heartbeat is unchanged for 8.999258854s, waiting for 1m20s May 12 17:24:47.488: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:24:47.491: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 
17:24:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:24:48.490: INFO: node status heartbeat is unchanged for 1.001759281s, waiting for 1m20s May 12 17:24:49.487: INFO: node status heartbeat is unchanged for 1.999295017s, waiting for 1m20s May 12 17:24:50.488: INFO: node status heartbeat is unchanged for 3.00025058s, waiting for 1m20s May 12 17:24:51.486: INFO: node status heartbeat is unchanged for 3.998422109s, waiting for 1m20s May 12 17:24:52.487: INFO: node status heartbeat is unchanged for 4.998674765s, waiting for 1m20s May 12 17:24:53.487: INFO: node status heartbeat is unchanged for 5.999034077s, waiting for 1m20s May 12 17:24:54.487: INFO: node status heartbeat is unchanged for 6.998548371s, waiting for 1m20s May 12 17:24:55.486: INFO: node status heartbeat is unchanged for 7.998188213s, waiting for 1m20s May 12 17:24:56.487: INFO: node status heartbeat is unchanged for 8.998688498s, waiting for 1m20s May 12 17:24:57.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:24:57.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:46 +0000 UTC"}, + LastHeartbeatTime: 
v1.Time{Time: s"2021-05-12 17:24:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:24:58.488: INFO: node status heartbeat is unchanged for 1.000665811s, waiting for 1m20s May 12 17:24:59.487: INFO: node status heartbeat is unchanged for 1.999683196s, waiting for 1m20s May 12 17:25:00.488: INFO: node status heartbeat is unchanged for 3.001290186s, waiting for 1m20s May 12 17:25:01.486: INFO: node status heartbeat is unchanged for 3.999408719s, waiting for 1m20s May 12 17:25:02.486: INFO: node status heartbeat is unchanged for 4.998646571s, waiting for 1m20s May 12 17:25:03.489: INFO: node status heartbeat is unchanged for 6.002566175s, waiting for 1m20s May 12 17:25:04.487: INFO: node status heartbeat is unchanged for 6.999890879s, waiting for 1m20s May 12 17:25:05.487: INFO: node status heartbeat is unchanged for 8.000265436s, waiting for 1m20s May 12 17:25:06.488: INFO: node status heartbeat is unchanged for 9.000734799s, waiting for 1m20s May 12 17:25:07.489: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:25:07.492: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:24:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 12 17:25:08.490: INFO: node status heartbeat is unchanged for 1.000349942s, waiting for 1m20s May 12 17:25:09.488: INFO: node status heartbeat is unchanged for 1.99836762s, waiting for 1m20s May 12 17:25:10.486: INFO: node status heartbeat is unchanged for 2.996822295s, waiting for 1m20s May 12 17:25:11.487: INFO: node status heartbeat is unchanged for 3.998228325s, waiting for 1m20s May 12 17:25:12.487: INFO: node status heartbeat is unchanged for 4.99792469s, waiting for 1m20s May 12 17:25:13.487: INFO: node status heartbeat is unchanged for 5.998259664s, waiting for 1m20s May 12 17:25:14.486: INFO: node status heartbeat is unchanged for 6.997069036s, waiting for 1m20s May 12 17:25:15.488: INFO: node status heartbeat is unchanged for 7.999220589s, waiting for 1m20s May 12 17:25:16.488: INFO: node status heartbeat is unchanged for 8.998338697s, waiting for 1m20s May 12 17:25:17.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:25:17.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 
17:25:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:25:18.490: INFO: node status heartbeat is unchanged for 1.002859454s, waiting for 1m20s May 12 17:25:19.487: INFO: node status heartbeat is unchanged for 1.999965974s, waiting for 1m20s May 12 17:25:20.487: INFO: node status heartbeat is unchanged for 2.999828758s, waiting for 1m20s May 12 17:25:21.487: INFO: node status heartbeat is unchanged for 4.000504579s, waiting for 1m20s May 12 17:25:22.487: INFO: node status heartbeat is unchanged for 5.000337006s, waiting for 1m20s May 12 17:25:23.487: INFO: node status heartbeat is unchanged for 6.000463438s, waiting for 1m20s May 12 17:25:24.487: INFO: node status heartbeat is unchanged for 7.000325802s, waiting for 1m20s May 12 17:25:25.488: INFO: node status heartbeat is unchanged for 8.000581587s, waiting for 1m20s May 12 17:25:26.487: INFO: node status heartbeat is unchanged for 8.999985687s, waiting for 1m20s May 12 17:25:27.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:25:27.490: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:16 +0000 UTC"}, + LastHeartbeatTime: 
v1.Time{Time: s"2021-05-12 17:25:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:25:28.487: INFO: node status heartbeat is unchanged for 1.000351902s, waiting for 1m20s May 12 17:25:29.486: INFO: node status heartbeat is unchanged for 1.999225492s, waiting for 1m20s May 12 17:25:30.488: INFO: node status heartbeat is unchanged for 3.000979483s, waiting for 1m20s May 12 17:25:31.487: INFO: node status heartbeat is unchanged for 4.000552303s, waiting for 1m20s May 12 17:25:32.487: INFO: node status heartbeat is unchanged for 4.999674129s, waiting for 1m20s May 12 17:25:33.487: INFO: node status heartbeat is unchanged for 6.000017378s, waiting for 1m20s May 12 17:25:34.486: INFO: node status heartbeat is unchanged for 6.999029729s, waiting for 1m20s May 12 17:25:35.489: INFO: node status heartbeat is unchanged for 8.001721657s, waiting for 1m20s May 12 17:25:36.486: INFO: node status heartbeat is unchanged for 8.999629791s, waiting for 1m20s May 12 17:25:37.487: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:25:37.489: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 12 17:25:38.487: INFO: node status heartbeat is unchanged for 1.000382022s, waiting for 1m20s May 12 17:25:39.486: INFO: node status heartbeat is unchanged for 1.999458153s, waiting for 1m20s May 12 17:25:40.487: INFO: node status heartbeat is unchanged for 3.000669144s, waiting for 1m20s May 12 17:25:41.487: INFO: node status heartbeat is unchanged for 4.000536029s, waiting for 1m20s May 12 17:25:42.486: INFO: node status heartbeat is unchanged for 4.999773252s, waiting for 1m20s May 12 17:25:43.488: INFO: node status heartbeat is unchanged for 6.001157236s, waiting for 1m20s May 12 17:25:44.487: INFO: node status heartbeat is unchanged for 7.000187336s, waiting for 1m20s May 12 17:25:45.487: INFO: node status heartbeat is unchanged for 8.000093178s, waiting for 1m20s May 12 17:25:46.487: INFO: node status heartbeat is unchanged for 9.000872643s, waiting for 1m20s May 12 17:25:47.489: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 12 17:25:47.492: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:14 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-12 
17:25:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-12 17:25:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-12 16:32:43 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-12 16:35:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 12 17:25:48.489: INFO: node status heartbeat is unchanged for 1.00101396s, waiting for 1m20s May 12 17:25:49.487: INFO: node status heartbeat is unchanged for 1.998405161s, waiting for 1m20s May 12 17:25:50.487: INFO: node status heartbeat is unchanged for 2.998036502s, waiting for 1m20s May 12 17:25:51.488: INFO: node status heartbeat is unchanged for 3.999098186s, waiting for 1m20s May 12 17:25:52.488: INFO: node status heartbeat is unchanged for 4.999849318s, waiting for 1m20s May 12 17:25:53.487: INFO: node status heartbeat is unchanged for 5.998745497s, waiting for 1m20s May 12 17:25:54.487: INFO: node status heartbeat is unchanged for 6.99840165s, waiting for 1m20s May 12 17:25:55.487: INFO: node status heartbeat is unchanged for 7.998401037s, waiting for 1m20s May 12 17:25:55.489: INFO: node status heartbeat is unchanged for 8.000600409s, waiting for 1m20s STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:25:55.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-6336" for this suite. 
• [SLOW TEST:300.052 seconds] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":3,"skipped":590,"failed":0} May 12 17:25:55.511: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:52.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:678 STEP: getting restart delay-0 May 12 17:22:49.712: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-05-12 17:22:06 +0000 UTC restartedAt=2021-05-12 17:22:48 +0000 UTC (42s) STEP: getting restart delay-1 May 12 17:24:16.994: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-05-12 17:22:53 +0000 UTC restartedAt=2021-05-12 17:24:15 +0000 UTC (1m22s) STEP: getting restart delay-2 May 12 17:27:14.652: INFO: getRestartDelay: restartCount = 6, finishedAt=2021-05-12 17:24:20 +0000 UTC restartedAt=2021-05-12 17:27:14 +0000 UTC (2m54s) STEP: updating the image May 12 17:27:15.162: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update May 12 17:27:40.228: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-05-12 17:27:25 +0000 UTC restartedAt=2021-05-12 17:27:39 +0000 UTC (14s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 17:27:40.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-705" for this suite. 
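[Annotation] The restart-delay samples above (42s, then 1m22s, then 2m54s, dropping back to 14s once the image is updated) are consistent with the kubelet's crash-loop back-off, which roughly doubles per restart up to a cap and is reset when the container spec changes. The sketch below shows that doubling schedule; the 10s base and 5m cap are the commonly cited kubelet defaults and are stated here as an assumption rather than read from this suite, and the observed delays additionally include image-pull and scheduling overhead.

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the approximate back-off before restart n
// (n = 0 for the first restart), assuming a base delay that doubles per
// restart and is capped at maxDelay. These defaults are an assumption
// based on the kubelet's documented CrashLoopBackOff behaviour.
func crashLoopDelay(n int, base, maxDelay time.Duration) time.Duration {
	d := base
	for i := 0; i < n; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	// Prints roughly 10s, 20s, 40s, 1m20s, 2m40s, 5m, 5m — the log's
	// 42s / 1m22s / 2m54s samples line up with this once overhead is added,
	// and the post-update 14s sample matches a reset back to the base.
	for n := 0; n < 7; n++ {
		fmt.Printf("restart %d: ~%v\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
}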
• [SLOW TEST:407.746 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:678 ------------------------------ {"msg":"PASSED [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":2,"skipped":151,"failed":0} May 12 17:27:40.239: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 17:20:46.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:719 STEP: getting restart delay when capped May 12 17:32:30.370: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-05-12 17:27:27 +0000 UTC restartedAt=2021-05-12 17:32:29 +0000 UTC (5m2s) May 12 17:45:31.316: FAIL: timed out waiting for container restart in pod=back-off-cap/back-off-cap Full Stack Trace k8s.io/kubernetes/test/e2e/common.glob..func18.10() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:750 +0x4e5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001fbf380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001fbf380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001fbf380, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "pods-482". STEP: Found 12 events. 
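[Annotation] The failure above is a timeout waiting for yet another restart of back-off-cap; the event dump that follows points at the likely cause, an ImagePullBackOff at 17:38:02 that stops the restart counter from advancing before the 5m cap can be confirmed again. When triaging this kind of failure, listing the pod's events directly is usually the quickest check; a minimal sketch follows, reusing the kubeconfig path and namespace name from this log as assumptions.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and namespace as they appear in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Equivalent to:
	//   kubectl get events -n pods-482 --field-selector involvedObject.name=back-off-cap
	events, err := cs.CoreV1().Events("pods-482").List(context.Background(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=back-off-cap",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s: %s\n", e.LastTimestamp.Format("15:04:05"), e.Reason, e.Message)
	}
}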
May 12 17:45:31.321: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for back-off-cap: { } Scheduled: Successfully assigned pods-482/back-off-cap to node2 May 12 17:45:31.321: INFO: At 2021-05-12 17:20:51 +0000 UTC - event for back-off-cap: {multus } AddedInterface: Add eth0 [10.244.4.25/24] May 12 17:45:31.321: INFO: At 2021-05-12 17:20:52 +0000 UTC - event for back-off-cap: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" May 12 17:45:31.321: INFO: At 2021-05-12 17:20:55 +0000 UTC - event for back-off-cap: {kubelet node2} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 3.609060519s May 12 17:45:31.321: INFO: At 2021-05-12 17:20:55 +0000 UTC - event for back-off-cap: {kubelet node2} Created: Created container back-off-cap May 12 17:45:31.321: INFO: At 2021-05-12 17:20:55 +0000 UTC - event for back-off-cap: {kubelet node2} Started: Started container back-off-cap May 12 17:45:31.321: INFO: At 2021-05-12 17:21:05 +0000 UTC - event for back-off-cap: {kubelet node2} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 2.847780016s May 12 17:45:31.321: INFO: At 2021-05-12 17:21:11 +0000 UTC - event for back-off-cap: {kubelet node2} BackOff: Back-off restarting failed container May 12 17:45:31.321: INFO: At 2021-05-12 17:21:26 +0000 UTC - event for back-off-cap: {kubelet node2} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 1.335268233s May 12 17:45:31.321: INFO: At 2021-05-12 17:21:59 +0000 UTC - event for back-off-cap: {kubelet node2} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 1.283565337s May 12 17:45:31.321: INFO: At 2021-05-12 17:22:46 +0000 UTC - event for back-off-cap: {kubelet node2} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 1.342128011s May 12 17:45:31.321: INFO: At 2021-05-12 17:38:02 +0000 UTC - event for back-off-cap: {kubelet node2} Failed: Error: ImagePullBackOff May 12 17:45:31.323: INFO: POD NODE PHASE GRACE CONDITIONS May 12 17:45:31.323: INFO: back-off-cap node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-12 17:20:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-12 17:32:34 +0000 UTC ContainersNotReady containers with unready status: [back-off-cap]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-12 17:32:34 +0000 UTC ContainersNotReady containers with unready status: [back-off-cap]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-12 17:20:46 +0000 UTC }] May 12 17:45:31.323: INFO: May 12 17:45:31.329: INFO: Logging node info for node master1 May 12 17:45:31.332: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 03e64d13-444c-41eb-b6bd-3745f05cd1cd 20116 0 2021-05-12 16:30:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ce:94:4b:fc:cf:bc"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:30:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:30:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-05-12 16:31:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:27 +0000 UTC,LastTransitionTime:2021-05-12 16:35:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:30 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:30 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:30 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 17:45:30 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:201a06a0695b47fdbdd7df0fdd94f4dc,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a7dddfd4-eda7-4a08-9750-d4a3c87c0cdd,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:60f4b5001bb5e7280fddf9143d3ed9bcde4e8016eef54522b5aea6bac9d9774b tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:1935bf73835eb6d4446668a5484bb3724d97d926b26b41c7bf064aa3a5a8bc5f tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 17:45:31.332: INFO: Logging kubelet events for node master1 May 12 17:45:31.335: INFO: Logging pods the kubelet thinks is on node master1 May 12 17:45:31.363: INFO: coredns-7677f9bb54-4lkxs started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.363: INFO: Container coredns ready: true, restart count 2 May 12 17:45:31.363: INFO: docker-registry-docker-registry-56cbc7bc58-lfjnt started at 2021-05-12 16:36:16 +0000 UTC (0+2 container statuses recorded) May 12 17:45:31.363: INFO: Container docker-registry ready: true, restart count 0 May 12 17:45:31.363: INFO: Container nginx ready: true, restart count 0 May 12 17:45:31.363: INFO: kube-apiserver-master1 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.363: INFO: Container kube-apiserver ready: true, restart count 0 May 12 17:45:31.363: INFO: kube-scheduler-master1 started at 2021-05-12 16:32:00 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.363: INFO: Container kube-scheduler ready: true, restart count 0 May 12 17:45:31.363: INFO: kube-proxy-v5zpq started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.363: INFO: Container kube-proxy ready: true, restart count 2 May 12 17:45:31.363: INFO: kube-flannel-rc2gh started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 17:45:31.363: INFO: Init container install-cni ready: true, restart count 2 May 12 17:45:31.363: INFO: Container kube-flannel ready: true, restart count 3 May 12 17:45:31.363: INFO: kube-controller-manager-master1 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses 
recorded) May 12 17:45:31.363: INFO: Container kube-controller-manager ready: true, restart count 2 May 12 17:45:31.363: INFO: kube-multus-ds-amd64-rcwwm started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.363: INFO: Container kube-multus ready: true, restart count 1 May 12 17:45:31.363: INFO: node-exporter-wd9j2 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 17:45:31.363: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 17:45:31.363: INFO: Container node-exporter ready: true, restart count 0 W0512 17:45:31.377566 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 17:45:31.411: INFO: Latency metrics for node master1 May 12 17:45:31.411: INFO: Logging node info for node master2 May 12 17:45:31.414: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 f3d8b0a9-b7ca-4942-bfc7-69b182fddff8 20110 0 2021-05-12 16:31:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:9d:92:dd:24:1d"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:31:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:31:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-12 16:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:36:36 +0000 UTC,LastTransitionTime:2021-05-12 16:36:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:29 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:29 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:29 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 17:45:29 +0000 UTC,LastTransitionTime:2021-05-12 16:33:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ef3726c355844802be21d744dd831d84,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:6feec048-83de-4ff4-ae36-12985bcda218,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 17:45:31.414: INFO: Logging kubelet events for node master2 May 12 17:45:31.417: INFO: Logging pods the kubelet thinks is on node master2 May 12 17:45:31.432: INFO: kube-flannel-pppch started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 17:45:31.432: INFO: Init container install-cni ready: true, restart count 2 May 12 17:45:31.432: INFO: Container kube-flannel ready: true, restart count 1 May 12 17:45:31.432: INFO: kube-multus-ds-amd64-4swdk started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.432: INFO: Container kube-multus ready: true, restart count 1 May 12 17:45:31.432: INFO: coredns-7677f9bb54-b628b started at 2021-05-12 16:33:48 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.432: INFO: Container coredns ready: true, restart 
count 1 May 12 17:45:31.432: INFO: node-exporter-dprww started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 17:45:31.432: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 17:45:31.432: INFO: Container node-exporter ready: true, restart count 0 May 12 17:45:31.432: INFO: kube-apiserver-master2 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.432: INFO: Container kube-apiserver ready: true, restart count 0 May 12 17:45:31.432: INFO: kube-controller-manager-master2 started at 2021-05-12 16:38:53 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.432: INFO: Container kube-controller-manager ready: true, restart count 2 May 12 17:45:31.432: INFO: kube-scheduler-master2 started at 2021-05-12 16:32:00 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.433: INFO: Container kube-scheduler ready: true, restart count 2 May 12 17:45:31.433: INFO: kube-proxy-6ljsd started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.433: INFO: Container kube-proxy ready: true, restart count 2 W0512 17:45:31.447039 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 17:45:31.471: INFO: Latency metrics for node master2 May 12 17:45:31.471: INFO: Logging node info for node master3 May 12 17:45:31.474: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 a0124635-fd55-4774-af26-32765b659025 20109 0 2021-05-12 16:31:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"8e:91:ce:de:d3:10"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:31:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:31:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-12 16:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-12 16:39:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:42 +0000 
UTC,LastTransitionTime:2021-05-12 16:35:42 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:29 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:29 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:29 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 17:45:29 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8966dda31ac243c3b9142da7eb1a3315,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:29bf94f6-9eaa-4435-8e2a-f17b4721b8df,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 17:45:31.474: INFO: Logging kubelet events for node master3 May 12 17:45:31.476: INFO: Logging pods the kubelet thinks is on node master3 May 12 17:45:31.492: INFO: kube-apiserver-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.492: INFO: Container kube-apiserver ready: true, restart count 0 May 12 17:45:31.492: INFO: kube-controller-manager-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.492: INFO: Container kube-controller-manager ready: true, restart count 2 May 12 17:45:31.492: INFO: kube-multus-ds-amd64-4hlf8 started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.493: INFO: Container kube-multus ready: true, restart count 1 May 12 17:45:31.493: INFO: prometheus-operator-5bb8cb9d8f-sqgl4 started at 2021-05-12 16:42:55 +0000 UTC (0+2 container statuses recorded) May 12 17:45:31.493: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 17:45:31.493: INFO: Container prometheus-operator ready: true, restart count 0 May 12 17:45:31.493: INFO: node-exporter-jzp84 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 17:45:31.493: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 17:45:31.493: INFO: Container node-exporter ready: true, restart count 0 May 12 17:45:31.493: INFO: kube-scheduler-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.493: INFO: Container kube-scheduler ready: true, restart count 3 May 12 17:45:31.493: INFO: kube-proxy-d57dm started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.493: INFO: Container kube-proxy ready: true, restart count 1 May 12 17:45:31.493: INFO: 
kube-flannel-hmbzn started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 17:45:31.493: INFO: Init container install-cni ready: true, restart count 2 May 12 17:45:31.493: INFO: Container kube-flannel ready: true, restart count 2 May 12 17:45:31.493: INFO: dns-autoscaler-5b7b5c9b6f-8fzpk started at 2021-05-12 16:33:51 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.493: INFO: Container autoscaler ready: true, restart count 1 May 12 17:45:31.493: INFO: node-feature-discovery-controller-5bf5c49849-gdz9z started at 2021-05-12 16:38:58 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.493: INFO: Container nfd-controller ready: true, restart count 0 W0512 17:45:31.506143 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 17:45:31.536: INFO: Latency metrics for node master3 May 12 17:45:31.536: INFO: Logging node info for node node1 May 12 17:45:31.539: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 2f7b1644-6865-412d-b423-194903813633 20098 0 2021-05-12 16:32:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"6a:b5:62:81:d9:8e"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock 
nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-12 16:33:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-12 16:39:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-12 16:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-12 16:41:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:36:30 +0000 UTC,LastTransitionTime:2021-05-12 16:36:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:24 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:24 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:24 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 17:45:24 +0000 UTC,LastTransitionTime:2021-05-12 16:33:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d9a9ef88c91340689744923e9951f78a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:d2f244db-a37b-4dba-a8f9-6ea24c6cdb4e,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:96570ae3225d359c9667e2cbd57987388aa9d3be9d5a198ed9b677c7f9f4e450 localhost:30500/barometer-collectd:stable],SizeBytes:1464260582,},ContainerImage{Names:[@ :],SizeBytes:1002488025,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f178577724ef3b8118ca58277a27d594257de6c55a813b61e421df4008a7f73b cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:1636899c10870ab66c48d960a9df620f4f9e86a0c72fbacf36032d27404e7e6c golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:34ac17eb150f557f6ce94ea34e7c03f3899b717307cad633213a9488a431179f nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:155361826d160d9f566aede3ea35b34f1e0c6422720285b1c22467c2b21a90aa nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392919,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 17:45:31.540: INFO: Logging kubelet events for node node1 May 12 17:45:31.542: INFO: Logging 
pods the kubelet thinks is on node node1 May 12 17:45:31.560: INFO: kubernetes-dashboard-86c6f9df5b-vkvbq started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.560: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 12 17:45:31.560: INFO: kubernetes-metrics-scraper-678c97765c-s4sgj started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.560: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 12 17:45:31.560: INFO: cmk-v4qwz started at 2021-05-12 16:42:07 +0000 UTC (0+2 container statuses recorded) May 12 17:45:31.560: INFO: Container nodereport ready: true, restart count 0 May 12 17:45:31.560: INFO: Container reconcile ready: true, restart count 0 May 12 17:45:31.560: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff started at 2021-05-12 16:39:41 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.560: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 17:45:31.560: INFO: cmk-init-discover-node1-2x2zk started at 2021-05-12 16:41:25 +0000 UTC (0+3 container statuses recorded) May 12 17:45:31.560: INFO: Container discover ready: false, restart count 0 May 12 17:45:31.560: INFO: Container init ready: false, restart count 0 May 12 17:45:31.560: INFO: Container install ready: false, restart count 0 May 12 17:45:31.560: INFO: prometheus-k8s-0 started at 2021-05-12 16:43:20 +0000 UTC (0+5 container statuses recorded) May 12 17:45:31.560: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 12 17:45:31.560: INFO: Container grafana ready: true, restart count 0 May 12 17:45:31.560: INFO: Container prometheus ready: true, restart count 1 May 12 17:45:31.560: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 12 17:45:31.560: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 12 17:45:31.560: INFO: collectd-5mpmz started at 2021-05-12 16:49:38 +0000 UTC (0+3 container statuses recorded) May 12 17:45:31.560: INFO: Container collectd ready: true, restart count 0 May 12 17:45:31.560: INFO: Container collectd-exporter ready: true, restart count 0 May 12 17:45:31.560: INFO: Container rbac-proxy ready: true, restart count 0 May 12 17:45:31.560: INFO: nginx-proxy-node1 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.560: INFO: Container nginx-proxy ready: true, restart count 2 May 12 17:45:31.560: INFO: kube-proxy-r9vsx started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.560: INFO: Container kube-proxy ready: true, restart count 1 May 12 17:45:31.560: INFO: node-feature-discovery-worker-qtn84 started at 2021-05-12 16:38:48 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.560: INFO: Container nfd-worker ready: true, restart count 0 May 12 17:45:31.561: INFO: node-exporter-ddxbd started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 17:45:31.561: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 17:45:31.561: INFO: Container node-exporter ready: true, restart count 0 May 12 17:45:31.561: INFO: kube-flannel-r7w6z started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 17:45:31.561: INFO: Init container install-cni ready: true, restart count 1 May 12 17:45:31.561: INFO: Container kube-flannel ready: true, restart count 2 May 12 17:45:31.561: INFO: kube-multus-ds-amd64-fhzwc started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses 
recorded) May 12 17:45:31.561: INFO: Container kube-multus ready: true, restart count 1 W0512 17:45:31.574835 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 17:45:31.606: INFO: Latency metrics for node node1 May 12 17:45:31.606: INFO: Logging node info for node node2 May 12 17:45:31.609: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 16e4ef1f-0c51-44d5-a257-2eaa348c0d52 20089 0 2021-05-12 16:32:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"52:9a:ee:23:4c:ac"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-12 16:33:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-12 16:39:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-12 16:41:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-12 16:41:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:14 +0000 UTC,LastTransitionTime:2021-05-12 16:35:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:22 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:22 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 17:45:22 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 17:45:22 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eebaf4858fee4a739009f2f7f2717953,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:15bd8119-98f5-4e13-a2fa-897c2224dd6e,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:96570ae3225d359c9667e2cbd57987388aa9d3be9d5a198ed9b677c7f9f4e450 localhost:30500/barometer-collectd:stable],SizeBytes:1464260582,},ContainerImage{Names:[localhost:30500/cmk@sha256:f178577724ef3b8118ca58277a27d594257de6c55a813b61e421df4008a7f73b localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[nginx@sha256:34ac17eb150f557f6ce94ea34e7c03f3899b717307cad633213a9488a431179f nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:155361826d160d9f566aede3ea35b34f1e0c6422720285b1c22467c2b21a90aa 
localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392919,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:60f4b5001bb5e7280fddf9143d3ed9bcde4e8016eef54522b5aea6bac9d9774b localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:1935bf73835eb6d4446668a5484bb3724d97d926b26b41c7bf064aa3a5a8bc5f localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 17:45:31.610: INFO: Logging kubelet events for node node2 May 12 17:45:31.612: INFO: Logging pods the kubelet thinks is on node node2 May 12 17:45:31.629: INFO: cmk-init-discover-node2-qrd9v started at 2021-05-12 16:41:44 +0000 UTC (0+3 container statuses recorded) May 12 17:45:31.629: INFO: Container discover ready: false, restart count 0 May 12 17:45:31.629: INFO: Container init ready: false, restart count 0 May 12 17:45:31.629: INFO: Container install ready: false, restart count 0 May 12 17:45:31.629: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-j6r4r started at 2021-05-12 16:39:41 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.629: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 17:45:31.629: INFO: cmk-gbw5d started at 2021-05-12 16:42:08 +0000 UTC (0+2 container statuses recorded) May 12 17:45:31.629: INFO: Container nodereport ready: true, restart count 0 May 12 17:45:31.629: INFO: Container reconcile ready: true, restart count 0 May 12 17:45:31.629: INFO: node-exporter-h5rv7 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 17:45:31.629: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 17:45:31.629: INFO: Container node-exporter ready: true, restart count 0 May 12 17:45:31.629: INFO: kube-proxy-grtqc started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.629: INFO: Container kube-proxy ready: true, restart count 2 May 12 17:45:31.629: INFO: kube-multus-ds-amd64-k28rf started at 2021-05-12 16:33:28 +0000 UTC (0+1 
container statuses recorded) May 12 17:45:31.629: INFO: Container kube-multus ready: true, restart count 1 May 12 17:45:31.629: INFO: node-feature-discovery-worker-zjvzk started at 2021-05-12 16:38:48 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.629: INFO: Container nfd-worker ready: true, restart count 0 May 12 17:45:31.629: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-8xc75 started at 2021-05-12 16:46:05 +0000 UTC (0+2 container statuses recorded) May 12 17:45:31.629: INFO: Container tas-controller ready: true, restart count 0 May 12 17:45:31.629: INFO: Container tas-extender ready: true, restart count 0 May 12 17:45:31.629: INFO: collectd-tng6x started at 2021-05-12 16:49:38 +0000 UTC (0+3 container statuses recorded) May 12 17:45:31.629: INFO: Container collectd ready: true, restart count 0 May 12 17:45:31.629: INFO: Container collectd-exporter ready: true, restart count 0 May 12 17:45:31.629: INFO: Container rbac-proxy ready: true, restart count 0 May 12 17:45:31.629: INFO: back-off-cap started at 2021-05-12 17:20:46 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.629: INFO: Container back-off-cap ready: false, restart count 7 May 12 17:45:31.629: INFO: nginx-proxy-node2 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.629: INFO: Container nginx-proxy ready: true, restart count 2 May 12 17:45:31.629: INFO: kube-flannel-rqtcs started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 17:45:31.629: INFO: Init container install-cni ready: true, restart count 2 May 12 17:45:31.629: INFO: Container kube-flannel ready: true, restart count 1 May 12 17:45:31.629: INFO: cmk-webhook-6c9d5f8578-fgcvr started at 2021-05-12 16:42:08 +0000 UTC (0+1 container statuses recorded) May 12 17:45:31.629: INFO: Container cmk-webhook ready: true, restart count 0 W0512 17:45:31.642508 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 17:45:31.670: INFO: Latency metrics for node node2 May 12 17:45:31.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-482" for this suite. 
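
The dump above (node info plus the pods each kubelet reports, per node) is the diagnostic state the e2e framework prints when a spec fails. The same per-node pod inventory can be pulled back out of the cluster with a small client-go program; the sketch below is a minimal standalone example (not the framework's own dump helper), and it assumes the kubeconfig path the suite logged (/root/.kube/config) and the node name node2 from this run.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as logged by the suite; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List every pod bound to node2, across all namespaces; this is roughly the
	// inventory the framework prints as "Logging pods the kubelet thinks is on node node2".
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node2",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready=%v restarts=%d\n",
				p.Namespace, p.Name, cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}
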
• Failure [1484.751 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:719

  May 12 17:45:31.316: timed out waiting for container restart in pod=back-off-cap/back-off-cap

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:750
------------------------------
{"msg":"FAILED [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":0,"skipped":377,"failed":1,"failures":["[k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]"]}
May 12 17:45:31.688: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted","total":-1,"completed":4,"skipped":454,"failed":0}
May 12 17:22:17.838: INFO: Running AfterSuite actions on all nodes
May 12 17:45:31.753: INFO: Running AfterSuite actions on node 1
May 12 17:45:31.753: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Fail] [k8s.io] Pods [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:750

Ran 30 of 5484 Specs in 1485.959 seconds
FAIL! -- 29 Passed | 1 Failed | 0 Pending | 5454 Skipped

Ginkgo ran 1 suite in 24m47.392102352s
Test Suite Failed
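
The single failure is the back-off cap spec: per the message at pods.go:750, the wait for another restart of the back-off-cap container timed out, and the node2 pod dump above shows that container stuck at restart count 7 and not ready. The spec expects restarts to keep arriving even once the kubelet has reached its maximum container back-off (five minutes by default), so an unusually long gap between restarts can indicate slow restart handling on the node or simply a flaky run. One way to inspect the restart history is to read the pod's container statuses; the sketch below is a minimal, hypothetical client-go snippet (namespace pods-482 and pod back-off-cap are taken from this run's log), not part of the test suite, and it only works while the test namespace still exists, since the teardown above deletes it.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as logged by the suite; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Namespace and pod name taken from this run's log.
	pod, err := client.CoreV1().Pods("pods-482").Get(context.TODO(), "back-off-cap", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		fmt.Printf("container %s: ready=%v restarts=%d\n", cs.Name, cs.Ready, cs.RestartCount)
		// The gap between the last termination's FinishedAt and the next start
		// approximates the back-off delay the kubelet applied.
		if t := cs.LastTerminationState.Terminated; t != nil {
			fmt.Printf("  last termination: exit=%d finished=%v\n", t.ExitCode, t.FinishedAt)
		}
		if r := cs.State.Running; r != nil {
			fmt.Printf("  currently running since %v\n", r.StartedAt)
		}
	}
}
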