Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1622234976 - Will randomize all specs
Will run 5484 specs

Running in parallel across 10 nodes

May 28 20:49:38.427: INFO: >>> kubeConfig: /root/.kube/config
May 28 20:49:38.429: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 28 20:49:38.453: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 28 20:49:38.520: INFO: The status of Pod cmk-init-discover-node1-rvqxm is Succeeded, skipping waiting
May 28 20:49:38.520: INFO: The status of Pod cmk-init-discover-node2-95cbr is Succeeded, skipping waiting
May 28 20:49:38.520: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 28 20:49:38.520: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 28 20:49:38.520: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 28 20:49:38.539: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 28 20:49:38.539: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 28 20:49:38.539: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 28 20:49:38.539: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 28 20:49:38.539: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 28 20:49:38.539: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 28 20:49:38.539: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 28 20:49:38.539: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 28 20:49:38.539: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 28 20:49:38.539: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 28 20:49:38.539: INFO: e2e test version: v1.19.11
May 28 20:49:38.540: INFO: kube-apiserver version: v1.19.8
May 28 20:49:38.541: INFO: >>> kubeConfig: /root/.kube/config
May 28 20:49:38.546: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
May 28 20:49:38.551: INFO: >>> kubeConfig: /root/.kube/config
May 28 20:49:38.571: INFO: Cluster IP family: ipv4
SSS
------------------------------
May 28 20:49:38.557: INFO: >>> kubeConfig: /root/.kube/config
May 28 20:49:38.578: INFO: Cluster IP family: ipv4
S
------------------------------
May 28 20:49:38.557: INFO: >>> kubeConfig: /root/.kube/config
May 28 20:49:38.579: INFO: Cluster IP family: ipv4
S
------------------------------
May 28 20:49:38.558: INFO: >>> kubeConfig: /root/.kube/config
May 28 20:49:38.579: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
May 28 20:49:38.562: INFO: >>> kubeConfig: /root/.kube/config
May 28 20:49:38.582: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 28 20:49:38.567: INFO: >>> kubeConfig: /root/.kube/config
May 28 20:49:38.592: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
May 28 20:49:38.573: INFO: >>> kubeConfig: /root/.kube/config
May 28 20:49:38.596: INFO: Cluster IP family: ipv4
S
------------------------------
May 28 20:49:38.575: INFO: >>> kubeConfig: /root/.kube/config
May 28 20:49:38.597: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 28 20:49:38.584: INFO: >>> kubeConfig: /root/.kube/config
May 28 20:49:38.605: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 28 20:49:38.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
May 28 20:49:38.660: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 28 20:49:38.664: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43
[It] the kubelet should create and update a lease in the kube-node-lease namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:50
STEP: check that lease for this Kubelet exists in the kube-node-lease namespace
STEP: check that node lease is updated at least once within the lease duration
[AfterEach] [k8s.io] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 28 20:49:38.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-640" for this suite.
•SSSSSSSSS ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":1,"skipped":17,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:38.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling May 28 20:49:38.740: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 28 20:49:38.742: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 28 20:49:38.744: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:38.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-9424" for this suite. [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0528 20:49:38.755082 34 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 202 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ebd20, 0x7540830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ebd20, 0x7540830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0017d0750, 0xcb4400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00092a8e0, 0xc0017d0750, 0xc00092a8e0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0017d0750, 0x4e0ccbdc3d3a33, 
0xc0017d0778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770e980, 0xcb, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0042167e0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00129de60, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00129de60, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00034f620, 0x52e3180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0017d16c0, 0xc003f75e00, 0x52e3180, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003f75e00, 0x0, 0x52e3180, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003f75e00, 0x52e3180, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001914000, 0xc003f75e00, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001914000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001914000, 0xc000618030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7f516f89fb50, 0xc003c6c480, 0x4c239b8, 0x14, 0xc0036542d0, 0x3, 0x3, 0x53981a0, 0xc0001e08c0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e7de0, 0xc003c6c480, 0x4c239b8, 0x14, 0xc003435400, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e7de0, 0xc003c6c480, 0x4c239b8, 0x14, 0xc000f8d900, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003c6c480) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc003c6c480) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc003c6c480, 0x4de5140) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:238 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:38.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl May 28 20:49:38.812: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 28 20:49:38.814: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:148 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:38.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-4277" for this suite. •SSSS ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":1,"skipped":57,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:38.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-pools STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:34 May 28 20:49:39.011: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:39.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-pools-8398" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a cluster with multiple node pools [Feature:GKENodePool] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:38 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:39.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename localssd May 28 20:49:39.094: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 28 20:49:39.096: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:36 May 28 20:49:39.098: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:39.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "localssd-5729" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should write and read from node local SSD [Feature:GKELocalSSD] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:40 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:37 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:38.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:44.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5618" for this suite. 
• [SLOW TEST:6.043 seconds]
[k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99
    should not run without a specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":1,"skipped":59,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 28 20:49:38.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
May 28 20:49:38.730: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 28 20:49:38.732: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull image [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 28 20:49:45.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1117" for this suite.
• [SLOW TEST:7.078 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":29,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 28 20:49:46.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename autoscaling
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71
May 28 20:49:46.033: INFO: Only supported for providers [gce gke kubemark] (not skeleton)
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 28 20:49:46.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "autoscaling-7863" for this suite.
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0528 20:49:46.041750 31 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 220 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ebd20, 0x7540830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ebd20, 0x7540830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00413a750, 0xcb4400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc001d82080, 0xc00413a750, 0xc001d82080, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00413a750, 0x4e0ccd8e8dd8a0, 0xc00413a778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770e980, 0x97, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc001942c30, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001437980, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001437980, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000696b58, 0x52e3180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00413b6c0, 0xc0039a1d10, 0x52e3180, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0039a1d10, 0x0, 0x52e3180, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0039a1d10, 0x52e3180, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001b5a000, 0xc0039a1d10, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001b5a000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001b5a000, 0xc0040b0060) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7f1270065a40, 0xc001a9c000, 0x4c239b8, 0x14, 0xc0030136e0, 0x3, 0x3, 0x53981a0, 0xc0001e08c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e7de0, 0xc001a9c000, 0x4c239b8, 0x14, 0xc000e3a400, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e7de0, 0xc001a9c000, 0x4c239b8, 0x14, 0xc002f91ca0, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a9c000) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001a9c000) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001a9c000, 0x4de5140) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale down empty nodes [Feature:ClusterAutoscalerScalability3] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:210 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:38.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl May 28 20:49:38.706: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 28 20:49:38.708: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:46.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"sysctl-3635" for this suite. • [SLOW TEST:8.083 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":1,"skipped":17,"failed":0} SS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:46.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 28 20:49:46.786: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:46.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-2798" for this suite. [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0528 20:49:46.799292 37 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 322 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ebd20, 0x7540830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ebd20, 0x7540830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc000222078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002186750, 0xcb4400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0015a4640, 0xc002186750, 0xc0015a4640, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc002186750, 0x4e0ccdbbb5f5c8, 0xc002186778) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770e980, 0xa2, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc00052bec0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000b52300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000b52300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00120e338, 0x52e3180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0021876c0, 0xc002bc90e0, 0x52e3180, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002bc90e0, 0x0, 0x52e3180, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002bc90e0, 0x52e3180, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003ed2000, 0xc002bc90e0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003ed2000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003ed2000, 0xc003ec8030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000244230, 
0x7f412c41aa60, 0xc00273c180, 0x4c239b8, 0x14, 0xc0024e5c20, 0x3, 0x3, 0x53981a0, 0xc0002608c0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e7de0, 0xc00273c180, 0x4c239b8, 0x14, 0xc003a1af40, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e7de0, 0xc00273c180, 0x4c239b8, 0x14, 0xc002731b60, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00273c180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc00273c180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc00273c180, 0x4de5140) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:335 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:38.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime May 28 20:49:38.802: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 28 20:49:38.804: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:46.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-206" for this suite. 
• [SLOW TEST:8.079 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:377 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":1,"skipped":58,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:39.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 May 28 20:49:39.254: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2635" to be "Succeeded or Failed" May 28 20:49:39.256: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270131ms May 28 20:49:41.260: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006370005s May 28 20:49:43.266: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012100413s May 28 20:49:45.269: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015133129s May 28 20:49:47.272: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018818803s May 28 20:49:47.272: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:47.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2635" for this suite. 
• [SLOW TEST:8.067 seconds]
[k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124
------------------------------
S
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":263,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 28 20:49:47.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename autoscaling
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71
May 28 20:49:47.419: INFO: Only supported for providers [gce gke kubemark] (not skeleton)
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 28 20:49:47.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "autoscaling-6299" for this suite.
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0528 20:49:47.429564 30 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 161 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ebd20, 0x7540830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ebd20, 0x7540830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001620d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc004324750, 0xcb4400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002e79b60, 0xc004324750, 0xc002e79b60, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc004324750, 0x4e0ccde1474992, 0xc004324778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770e980, 0xc5, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002d33bf0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00187d9e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00187d9e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000389b00, 0x52e3180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0043256c0, 0xc00278fef0, 0x52e3180, 0xc000160900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00278fef0, 0x0, 0x52e3180, 0xc000160900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00278fef0, 0x52e3180, 0xc000160900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003fae000, 0xc00278fef0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003fae000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003fae000, 0xc003fa4030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00016c280, 0x7f53e34c9720, 0xc001683380, 0x4c239b8, 0x14, 0xc0011d0300, 0x3, 0x3, 0x53981a0, 0xc000160900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e7de0, 0xc001683380, 0x4c239b8, 0x14, 0xc000bf61c0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e7de0, 0xc001683380, 0x4c239b8, 0x14, 0xc00174f620, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001683380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001683380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001683380, 0x4de5140) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:297 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:39.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples May 28 20:49:39.076: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 28 20:49:39.078: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 May 28 20:49:39.086: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 STEP: creating the pod May 28 20:49:39.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8676 create -f -' May 28 20:49:39.621: INFO: stderr: "" May 28 20:49:39.621: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly May 28 20:49:49.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8676 logs dapi-test-pod test-container' May 28 20:49:49.776: INFO: stderr: "" May 28 20:49:49.776: INFO: stdout: "KUBERNETES_PORT=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-8676\nMY_POD_IP=10.244.3.27\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" May 28 20:49:49.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8676 logs dapi-test-pod test-container' May 28 20:49:49.927: INFO: stderr: "" May 28 20:49:49.927: INFO: stdout: "KUBERNETES_PORT=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-8676\nMY_POD_IP=10.244.3.27\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:49.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-8676" for this suite. • [SLOW TEST:10.888 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace","total":-1,"completed":1,"skipped":177,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:39.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 May 28 20:49:39.300: INFO: Found ClusterRoles; assuming RBAC is enabled. 
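Aside on the Downward API spec above: the environment dump from dapi-test-pod shows MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP and MY_HOST_IP injected via Downward API field references. The manifest the test pipes to kubectl is not reproduced in this log; the Go sketch below (core/v1 types, assuming the k8s.io/api and k8s.io/apimachinery modules; image and container name are illustrative assumptions) builds an equivalent pod object.

// Sketch only: reconstructs a pod equivalent to the dapi-test-pod above.
// Image and container name are assumptions, not the test's exact fixture.
package main

import (
  "encoding/json"
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fieldEnv wires an environment variable to a Downward API field path.
func fieldEnv(name, path string) corev1.EnvVar {
  return corev1.EnvVar{
    Name: name,
    ValueFrom: &corev1.EnvVarSource{
      FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
    },
  }
}

func main() {
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "dapi-test-pod"},
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyNever,
      Containers: []corev1.Container{{
        Name:    "test-container",
        Image:   "busybox", // assumption; the suite uses its own test image
        Command: []string{"sh", "-c", "env"},
        Env: []corev1.EnvVar{
          fieldEnv("MY_POD_NAME", "metadata.name"),
          fieldEnv("MY_POD_NAMESPACE", "metadata.namespace"),
          fieldEnv("MY_POD_IP", "status.podIP"),
          fieldEnv("MY_HOST_IP", "status.hostIP"),
        },
      }},
    },
  }
  out, _ := json.MarshalIndent(pod, "", "  ")
  fmt.Println(string(out))
}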
[It] should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 STEP: creating secret and pod May 28 20:49:39.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7357 create -f -' May 28 20:49:39.687: INFO: stderr: "" May 28 20:49:39.687: INFO: stdout: "secret/test-secret created\n" May 28 20:49:39.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7357 create -f -' May 28 20:49:39.941: INFO: stderr: "" May 28 20:49:39.941: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly May 28 20:49:49.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7357 logs secret-test-pod test-container' May 28 20:49:50.115: INFO: stderr: "" May 28 20:49:50.115: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:50.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-7357" for this suite. • [SLOW TEST:10.847 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret","total":-1,"completed":2,"skipped":259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:40.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:50.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6590" for this suite. 
• [SLOW TEST:10.044 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 ------------------------------ SSS ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":769,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:50.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a docker exec liveness probe with timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215 May 28 20:49:50.288: INFO: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:50.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2619" for this suite. 
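Aside on the skipped exec-probe spec above: it would create a pod whose liveness probe uses an exec handler with a short timeoutSeconds, and it is skipped on this cluster because the dockershim native exec handler does not enforce probe timeouts. A minimal sketch of such a probe, with illustrative image and commands, follows.

// Sketch: an exec liveness probe with a 1s timeout, the shape of probe the
// skipped spec above would exercise. Image and commands are assumptions.
package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-timeout"},
    Spec: corev1.PodSpec{
      Containers: []corev1.Container{{
        Name:    "busybox",
        Image:   "busybox",
        Command: []string{"sh", "-c", "sleep 600"},
        LivenessProbe: &corev1.Probe{
          // In the v1.19-era core/v1 API the probe handler is the embedded
          // Handler field (renamed ProbeHandler in later releases).
          Handler: corev1.Handler{
            Exec: &corev1.ExecAction{
              // A probe command that runs longer than TimeoutSeconds should be
              // treated as a failure; the default Docker exec handler did not
              // enforce this, which is why the spec above is skipped.
              Command: []string{"sh", "-c", "sleep 10"},
            },
          },
          InitialDelaySeconds: 5,
          TimeoutSeconds:      1,
          PeriodSeconds:       5,
          FailureThreshold:    1,
        },
      }},
    },
  }
  fmt.Printf("%+v\n", pod.Spec.Containers[0].LivenessProbe)
}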
S [SKIPPING] [0.026 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a docker exec liveness probe with timeout [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215 The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:217 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:50.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:88 [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:50.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-7602" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":2,"skipped":546,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:48.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:51.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1331" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":2,"skipped":654,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:47.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 May 28 20:49:47.102: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-c8b66867-9c7d-40b1-affe-d572c3e2c7f9" in namespace "security-context-test-320" to be "Succeeded or Failed" May 28 20:49:47.107: INFO: Pod "busybox-readonly-true-c8b66867-9c7d-40b1-affe-d572c3e2c7f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438972ms May 28 20:49:49.109: INFO: Pod "busybox-readonly-true-c8b66867-9c7d-40b1-affe-d572c3e2c7f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007304799s May 28 20:49:51.112: INFO: Pod "busybox-readonly-true-c8b66867-9c7d-40b1-affe-d572c3e2c7f9": Phase="Failed", Reason="", readiness=false. Elapsed: 4.009681457s May 28 20:49:51.112: INFO: Pod "busybox-readonly-true-c8b66867-9c7d-40b1-affe-d572c3e2c7f9" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:51.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-320" for this suite. 
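Aside on the readOnlyRootFilesystem spec above: the framework waits for "Succeeded or Failed" because a container that tries to write to its root filesystem with readOnlyRootFilesystem=true is expected to exit non-zero, so ending in Phase=Failed still satisfies the condition. A minimal sketch of the relevant securityContext, with illustrative image and command, follows.

// Sketch: a container expected to fail because its root filesystem is
// mounted read-only. Image and command are illustrative assumptions.
package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-true"},
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyNever,
      Containers: []corev1.Container{{
        Name:  "busybox-readonly-true",
        Image: "busybox",
        // Writing to / should fail, so the pod is expected to end up Failed,
        // which is why the framework above accepts "Succeeded or Failed".
        Command: []string{"sh", "-c", "echo hello > /file; sleep 1"},
        SecurityContext: &corev1.SecurityContext{
          ReadOnlyRootFilesystem: boolPtr(true),
        },
      }},
    },
  }
  fmt.Printf("%+v\n", pod.Spec.Containers[0].SecurityContext)
}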
•S ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:51.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 28 20:49:51.244: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:51.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-5058" for this suite. [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0528 20:49:51.253420 37 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 322 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ebd20, 0x7540830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ebd20, 0x7540830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc000222078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002186750, 0xcb4400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0024deb40, 0xc002186750, 0xc0024deb40, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc002186750, 0x4e0ccec5334d71, 0xc002186778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770e980, 0x9c, 0x4f92d7) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0016738c0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000b52300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000b52300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00120e338, 0x52e3180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0021876c0, 0xc002bc8d20, 0x52e3180, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002bc8d20, 0x0, 0x52e3180, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002bc8d20, 0x52e3180, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003ed2000, 0xc002bc8d20, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003ed2000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003ed2000, 0xc003ec8030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000244230, 0x7f412c41aa60, 0xc00273c180, 0x4c239b8, 0x14, 0xc0024e5c20, 0x3, 0x3, 0x53981a0, 0xc0002608c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e7de0, 0xc00273c180, 0x4c239b8, 0x14, 0xc003a1af40, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e7de0, 0xc00273c180, 0x4c239b8, 0x14, 0xc002731b60, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00273c180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc00273c180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc00273c180, 0x4de5140) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale up twice [Feature:ClusterAutoscalerScalability2] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:161 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:51.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 28 20:49:51.395: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:51.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-3054" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0528 20:49:51.403660 37 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 322 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ebd20, 0x7540830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ebd20, 0x7540830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc000222078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002186750, 0xcb4400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00119ce20, 0xc002186750, 0xc00119ce20, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc002186750, 0x4e0ccece27b387, 0xc002186778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770e980, 0xae, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002038e10, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000b52300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000b52300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00120e338, 0x52e3180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0021876c0, 0xc002bc8c30, 0x52e3180, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002bc8c30, 0x0, 0x52e3180, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002bc8c30, 0x52e3180, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003ed2000, 0xc002bc8c30, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003ed2000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003ed2000, 0xc003ec8030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000244230, 0x7f412c41aa60, 0xc00273c180, 0x4c239b8, 0x14, 0xc0024e5c20, 0x3, 0x3, 0x53981a0, 0xc0002608c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e7de0, 0xc00273c180, 0x4c239b8, 0x14, 0xc003a1af40, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e7de0, 0xc00273c180, 0x4c239b8, 0x14, 0xc002731b60, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00273c180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc00273c180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc00273c180, 0x4de5140) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale up at all [Feature:ClusterAutoscalerScalability1] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:138 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:39.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod May 28 20:49:39.125: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 28 20:49:39.127: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 STEP: Creating a pod with a privileged container STEP: Executing in the privileged container May 28 20:49:53.150: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1170 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 20:49:53.150: INFO: >>> kubeConfig: /root/.kube/config May 28 20:49:53.577: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-1170 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 20:49:53.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container May 28 20:49:54.148: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1170 PodName:privileged-pod 
ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 20:49:54.148: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:54.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-1170" for this suite. • [SLOW TEST:15.161 seconds] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 ------------------------------ {"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":1,"skipped":193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:46.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 May 28 20:49:46.149: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-0f956f55-a7f2-4bc2-9aac-8443b10e2779" in namespace "security-context-test-6956" to be "Succeeded or Failed" May 28 20:49:46.151: INFO: Pod "alpine-nnp-true-0f956f55-a7f2-4bc2-9aac-8443b10e2779": Phase="Pending", Reason="", readiness=false. Elapsed: 1.945356ms May 28 20:49:48.154: INFO: Pod "alpine-nnp-true-0f956f55-a7f2-4bc2-9aac-8443b10e2779": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004313434s May 28 20:49:50.156: INFO: Pod "alpine-nnp-true-0f956f55-a7f2-4bc2-9aac-8443b10e2779": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007042963s May 28 20:49:52.160: INFO: Pod "alpine-nnp-true-0f956f55-a7f2-4bc2-9aac-8443b10e2779": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010823863s May 28 20:49:54.163: INFO: Pod "alpine-nnp-true-0f956f55-a7f2-4bc2-9aac-8443b10e2779": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013567255s May 28 20:49:56.167: INFO: Pod "alpine-nnp-true-0f956f55-a7f2-4bc2-9aac-8443b10e2779": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.017388142s May 28 20:49:56.167: INFO: Pod "alpine-nnp-true-0f956f55-a7f2-4bc2-9aac-8443b10e2779" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:56.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6956" for this suite. • [SLOW TEST:10.067 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:50.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 May 28 20:49:50.553: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-3103" to be "Succeeded or Failed" May 28 20:49:50.556: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 3.155348ms May 28 20:49:52.559: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005876591s May 28 20:49:54.561: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008305499s May 28 20:49:56.565: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011762147s May 28 20:49:56.565: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:56.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3103" for this suite. 
• [SLOW TEST:6.059 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":3,"skipped":924,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:56.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:183 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:58.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-8260" for this suite. 
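Aside on the greylisted-sysctl spec above: the pod asks for an unsafe sysctl in its pod-level securityContext, and because the kubelet has not been started with that sysctl in its allowed-unsafe-sysctls list, the pod is rejected rather than started. The log does not show which sysctl the test uses; kernel.msgmax below is an illustrative choice only.

// Sketch: a pod requesting an unsafe sysctl. Unless the kubelet explicitly
// allows it (--allowed-unsafe-sysctls), the pod is rejected, which is what
// the spec above checks. kernel.msgmax and the container are assumptions.
package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "sysctl-unsafe"},
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyNever,
      SecurityContext: &corev1.PodSecurityContext{
        Sysctls: []corev1.Sysctl{
          {Name: "kernel.msgmax", Value: "10000000000"},
        },
      },
      Containers: []corev1.Container{{
        Name:    "test-container",
        Image:   "busybox",
        Command: []string{"sh", "-c", "sysctl kernel.msgmax"},
      }},
    },
  }
  fmt.Printf("%+v\n", pod.Spec.SecurityContext.Sysctls)
}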
• ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":3,"skipped":220,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:54.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 May 28 20:49:54.886: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-abc16d2e-89fa-42b8-a424-8810e2b576d0" in namespace "security-context-test-1276" to be "Succeeded or Failed" May 28 20:49:54.889: INFO: Pod "alpine-nnp-nil-abc16d2e-89fa-42b8-a424-8810e2b576d0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.328427ms May 28 20:49:56.893: INFO: Pod "alpine-nnp-nil-abc16d2e-89fa-42b8-a424-8810e2b576d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006459208s May 28 20:49:58.895: INFO: Pod "alpine-nnp-nil-abc16d2e-89fa-42b8-a424-8810e2b576d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009173075s May 28 20:49:58.895: INFO: Pod "alpine-nnp-nil-abc16d2e-89fa-42b8-a424-8810e2b576d0" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:49:58.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1276" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:38.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods May 28 20:49:38.984: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 28 20:49:38.985: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:774 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:50:01.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6458" for this suite. 
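Aside on the readiness-gate spec above: a pod that declares readinessGates only reports Ready once every listed condition type is True in its status, which is why the spec patches "k8s.io/test-condition1" and "k8s.io/test-condition2" on the pod's status. The sketch below shows the gate wiring and the rough shape of such a status patch; the container details are illustrative assumptions.

// Sketch: readiness gates plus the approximate status patch the spec above
// applies. Container name, image and command are assumptions.
package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A status patch of roughly this shape (applied to the pod's status) is what
// "patching pod status with condition ... to true" above refers to.
const conditionPatch = `{"status":{"conditions":[{"type":"k8s.io/test-condition1","status":"True"}]}}`

func main() {
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "pod-ready"},
    Spec: corev1.PodSpec{
      ReadinessGates: []corev1.PodReadinessGate{
        {ConditionType: "k8s.io/test-condition1"},
        {ConditionType: "k8s.io/test-condition2"},
      },
      Containers: []corev1.Container{{
        Name:    "pod-readiness-gate",
        Image:   "busybox",
        Command: []string{"sh", "-c", "sleep 600"},
      }},
    },
  }
  fmt.Println(conditionPatch)
  fmt.Printf("%+v\n", pod.Spec.ReadinessGates)
}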
• [SLOW TEST:22.076 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:774 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":1,"skipped":138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:56.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:50:02.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-738" for this suite. 
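Aside on the private-registry specs above: the failing-pull case (no secret) and the passing case differ only in whether the pod references a docker-registry credential secret through imagePullSecrets. A sketch of that pairing follows; the registry host, credentials and image are placeholders, not the suite's real fixture.

// Sketch: a dockerconfigjson secret plus a pod that references it via
// imagePullSecrets. Registry, credentials and image are placeholders.
package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  secret := corev1.Secret{
    ObjectMeta: metav1.ObjectMeta{Name: "image-pull-secret"},
    Type:       corev1.SecretTypeDockerConfigJson,
    StringData: map[string]string{
      corev1.DockerConfigJsonKey: `{"auths":{"registry.example.com":{"auth":"BASE64-USER-PASS"}}}`,
    },
  }
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "private-image-pod"},
    Spec: corev1.PodSpec{
      // Without this reference the pull fails, which is the behaviour the
      // earlier "without secret" spec asserts.
      ImagePullSecrets: []corev1.LocalObjectReference{{Name: secret.Name}},
      Containers: []corev1.Container{{
        Name:  "private",
        Image: "registry.example.com/private/image:latest",
      }},
    },
  }
  fmt.Printf("%s -> %+v\n", secret.Name, pod.Spec.ImagePullSecrets)
}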
• [SLOW TEST:6.081 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":4,"skipped":946,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:58.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:50:04.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-526" for this suite. 
• [SLOW TEST:6.071 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":4,"skipped":345,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 28 20:50:04.853: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:59.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 May 28 20:49:59.389: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-b943a623-4949-4d4f-a19f-7125bbe5bc49" in namespace "security-context-test-5169" to be "Succeeded or Failed" May 28 20:49:59.391: INFO: Pod "busybox-privileged-true-b943a623-4949-4d4f-a19f-7125bbe5bc49": Phase="Pending", Reason="", readiness=false. Elapsed: 1.931902ms May 28 20:50:01.394: INFO: Pod "busybox-privileged-true-b943a623-4949-4d4f-a19f-7125bbe5bc49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005034593s May 28 20:50:03.397: INFO: Pod "busybox-privileged-true-b943a623-4949-4d4f-a19f-7125bbe5bc49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007923593s May 28 20:50:05.399: INFO: Pod "busybox-privileged-true-b943a623-4949-4d4f-a19f-7125bbe5bc49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01047941s May 28 20:50:05.399: INFO: Pod "busybox-privileged-true-b943a623-4949-4d4f-a19f-7125bbe5bc49" satisfied condition "Succeeded or Failed" May 28 20:50:05.405: INFO: Got logs for pod "busybox-privileged-true-b943a623-4949-4d4f-a19f-7125bbe5bc49": "" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:50:05.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5169" for this suite. 
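Aside on the privileged-container specs above: the earlier PrivilegedPod spec shows the practical effect ("ip link add dummy1" is attempted in both containers and only the privileged one may modify host networking state), and the Security Context spec here simply runs a container with privileged=true. A sketch of the privileged/non-privileged pairing, with illustrative image and commands, follows.

// Sketch: privileged vs non-privileged containers in one pod, the pattern
// behind the specs above. Names, image and commands are assumptions.
package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod"},
    Spec: corev1.PodSpec{
      Containers: []corev1.Container{
        {
          Name:            "privileged-container",
          Image:           "busybox",
          Command:         []string{"sh", "-c", "sleep 600"},
          SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
        },
        {
          Name:            "not-privileged-container",
          Image:           "busybox",
          Command:         []string{"sh", "-c", "sleep 600"},
          SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(false)},
        },
      },
    },
  }
  fmt.Printf("%+v\n", pod.Spec.Containers)
}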
• [SLOW TEST:6.056 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":3,"skipped":749,"failed":0} May 28 20:50:05.416: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:50:01.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 May 28 20:50:01.630: INFO: Waiting up to 5m0s for pod "busybox-user-0-2229816f-9549-41d8-b391-b949ce3a67c7" in namespace "security-context-test-5224" to be "Succeeded or Failed" May 28 20:50:01.634: INFO: Pod "busybox-user-0-2229816f-9549-41d8-b391-b949ce3a67c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397075ms May 28 20:50:03.637: INFO: Pod "busybox-user-0-2229816f-9549-41d8-b391-b949ce3a67c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007433122s May 28 20:50:05.640: INFO: Pod "busybox-user-0-2229816f-9549-41d8-b391-b949ce3a67c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010573502s May 28 20:50:07.643: INFO: Pod "busybox-user-0-2229816f-9549-41d8-b391-b949ce3a67c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013733166s May 28 20:50:07.643: INFO: Pod "busybox-user-0-2229816f-9549-41d8-b391-b949ce3a67c7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:50:07.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5224" for this suite. 
• [SLOW TEST:6.057 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":446,"failed":0} May 28 20:50:07.654: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:50:03.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 28 20:50:08.323: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:50:08.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8290" for this suite. 
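Aside on the termination-message spec above: the kubelet copies whatever the container writes to its terminationMessagePath into status.containerStatuses[].state.terminated.message, which is why the log compares "DONE" against the container's termination message. A minimal sketch, with an assumed image, follows.

// Sketch: a container that writes "DONE" to its termination message path,
// the mechanism the passing spec above verifies. Image is an assumption.
package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "termination-message-pod"},
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyNever,
      Containers: []corev1.Container{{
        Name:    "termination-message-container",
        Image:   "busybox",
        Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-log"},
        // Default path shown explicitly; the File policy reads that file into
        // the terminated container status.
        TerminationMessagePath:   "/dev/termination-log",
        TerminationMessagePolicy: corev1.TerminationMessageReadFile,
      }},
    },
  }
  fmt.Printf("%+v\n", pod.Spec.Containers[0])
}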
• [SLOW TEST:5.073 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":5,"skipped":1251,"failed":0} May 28 20:50:08.341: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:50.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 STEP: Creating pod liveness-d452f32b-83aa-4229-9c75-c110bdbde29f in namespace container-probe-4227 May 28 20:49:56.640: INFO: Started pod liveness-d452f32b-83aa-4229-9c75-c110bdbde29f in namespace container-probe-4227 STEP: checking the pod's current state and verifying that restartCount is present May 28 20:49:56.642: INFO: Initial restart count of pod liveness-d452f32b-83aa-4229-9c75-c110bdbde29f is 0 May 28 20:50:12.673: INFO: Restart count of pod container-probe-4227/liveness-d452f32b-83aa-4229-9c75-c110bdbde29f is now 1 (16.030588684s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:50:12.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4227" for this suite. 
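The probe case just logged gives the container an HTTP liveness probe whose endpoint answers with a redirect to a path on the same host; the kubelet follows the local redirect, eventually gets a failing response, and restarts the container (restartCount goes from 0 to 1 about 16s in). A generic Go sketch of an HTTP liveness probe using the v1.19-era k8s.io/api types; the image, path and port are illustrative assumptions, not the exact fixture:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "liveness",
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // assumed liveness-server image
		LivenessProbe: &corev1.Probe{
			// The embedded field is named Handler in the v1.19-era API this suite
			// builds against; newer releases renamed it to ProbeHandler.
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/redirect?loc=%2Fhealthz", // redirects to a local path (illustrative)
					Port: intstr.FromInt(8080),
				},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1,
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}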
• [SLOW TEST:22.085 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":3,"skipped":507,"failed":0} May 28 20:50:12.690: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:50.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 [It] liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 May 28 20:49:50.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6813 create -f -' May 28 20:49:51.291: INFO: stderr: "" May 28 20:49:51.291: INFO: stdout: "pod/liveness-exec created\n" May 28 20:49:51.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6813 create -f -' May 28 20:49:51.584: INFO: stderr: "" May 28 20:49:51.584: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts May 28 20:49:57.591: INFO: Pod: liveness-exec, restart count:0 May 28 20:49:57.591: INFO: Pod: liveness-http, restart count:0 May 28 20:49:59.594: INFO: Pod: liveness-http, restart count:0 May 28 20:49:59.594: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:01.597: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:01.597: INFO: Pod: liveness-http, restart count:0 May 28 20:50:03.600: INFO: Pod: liveness-http, restart count:0 May 28 20:50:03.600: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:05.603: INFO: Pod: liveness-http, restart count:0 May 28 20:50:05.603: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:07.606: INFO: Pod: liveness-http, restart count:0 May 28 20:50:07.606: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:09.609: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:09.609: INFO: Pod: liveness-http, restart count:0 May 28 20:50:11.612: INFO: Pod: liveness-http, restart count:0 May 28 20:50:11.613: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:13.616: INFO: Pod: liveness-http, restart count:0 May 28 20:50:13.616: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:15.619: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:15.619: INFO: Pod: liveness-http, restart count:0 May 28 20:50:17.622: INFO: Pod: liveness-http, restart count:0 May 28 20:50:17.622: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:19.625: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:19.625: INFO: Pod: liveness-http, restart count:0 May 28 20:50:21.628: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:21.628: INFO: Pod: liveness-http, restart count:0 May 28 20:50:23.631: INFO: Pod: liveness-http, restart count:0 May 28 20:50:23.631: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:25.635: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:25.635: INFO: 
Pod: liveness-http, restart count:0 May 28 20:50:27.638: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:27.638: INFO: Pod: liveness-http, restart count:0 May 28 20:50:29.641: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:29.641: INFO: Pod: liveness-http, restart count:0 May 28 20:50:31.644: INFO: Pod: liveness-http, restart count:0 May 28 20:50:31.644: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:33.647: INFO: Pod: liveness-http, restart count:0 May 28 20:50:33.647: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:35.650: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:35.651: INFO: Pod: liveness-http, restart count:1 May 28 20:50:35.651: INFO: Saw liveness-http restart, succeeded... May 28 20:50:37.653: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:39.656: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:41.659: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:43.664: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:45.667: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:47.670: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:49.673: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:51.677: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:53.681: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:55.684: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:57.688: INFO: Pod: liveness-exec, restart count:0 May 28 20:50:59.691: INFO: Pod: liveness-exec, restart count:0 May 28 20:51:01.694: INFO: Pod: liveness-exec, restart count:0 May 28 20:51:03.698: INFO: Pod: liveness-exec, restart count:0 May 28 20:51:05.702: INFO: Pod: liveness-exec, restart count:0 May 28 20:51:07.705: INFO: Pod: liveness-exec, restart count:0 May 28 20:51:09.709: INFO: Pod: liveness-exec, restart count:0 May 28 20:51:11.713: INFO: Pod: liveness-exec, restart count:1 May 28 20:51:11.713: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:51:11.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-6813" for this suite. 
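The [Feature:Example] case above drives the two classic liveness example pods through kubectl (the "kubectl ... create -f -" lines) and then polls their restart counts every 2s until both have restarted at least once. A hedged sketch of piping such a manifest into kubectl from Go; the manifest is modelled on the well-known liveness-exec example and is not the exact fixture the suite ships:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Modelled on the classic liveness-exec example: the container creates
// /tmp/healthy, removes it after 30s, and the exec probe (cat /tmp/healthy)
// then fails until the kubelet restarts the container.
const livenessExec = `
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
`

func main() {
	// Mirrors the log's "kubectl --kubeconfig=... --namespace=<ns> create -f -" invocation.
	cmd := exec.Command("kubectl", "--namespace=default", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(livenessExec)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}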
• [SLOW TEST:80.848 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted","total":-1,"completed":3,"skipped":660,"failed":0} May 28 20:51:11.724: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:51.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:678 STEP: getting restart delay-0 May 28 20:51:02.186: INFO: getRestartDelay: restartCount = 3, finishedAt=2021-05-28 20:50:33 +0000 UTC restartedAt=2021-05-28 20:51:01 +0000 UTC (28s) STEP: getting restart delay-1 May 28 20:51:57.389: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-05-28 20:51:06 +0000 UTC restartedAt=2021-05-28 20:51:57 +0000 UTC (51s) STEP: getting restart delay-2 May 28 20:53:32.714: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-05-28 20:52:02 +0000 UTC restartedAt=2021-05-28 20:53:32 +0000 UTC (1m30s) STEP: updating the image May 28 20:53:33.226: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update May 28 20:53:56.279: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-05-28 20:53:43 +0000 UTC restartedAt=2021-05-28 20:53:55 +0000 UTC (12s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:53:56.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5975" for this suite. 
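The back-off case just logged checks two things: that the kubelet's crash-loop delay keeps growing between restarts (28s, 51s, 1m30s in the log, i.e. roughly doubling), and that it collapses back to a small value (12s) once the pod's image is updated. As a reading aid, a tiny Go sketch of that doubling-with-cap behaviour; the exact parameters are assumptions (a commonly cited 10s base, factor 2, 5m cap), not values read from this cluster:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet crash-loop back-off parameters (not verified against this cluster).
	base := 10 * time.Second
	maxDelay := 5 * time.Minute

	delay := base
	for i := 1; i <= 6; i++ {
		fmt.Printf("restart %d: wait ~%s before the next start\n", i, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Updating the pod's image resets this timer, which is why the delay observed
	// right after the image update (12s) is back near the base value.
}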
• [SLOW TEST:245.152 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:678 ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:51.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 STEP: Creating pod liveness-59a85c04-f5c1-46db-8d87-171534171047 in namespace container-probe-8446 May 28 20:49:59.460: INFO: Started pod liveness-59a85c04-f5c1-46db-8d87-171534171047 in namespace container-probe-8446 STEP: checking the pod's current state and verifying that restartCount is present May 28 20:49:59.463: INFO: Initial restart count of pod liveness-59a85c04-f5c1-46db-8d87-171534171047 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:53:59.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8446" for this suite. 
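The non-local-redirect case just logged is the mirror image of the earlier redirect test: because the probe endpoint redirects off-host, the test expects the container never to restart, and the log accordingly shows only the initial restart count of 0 before the namespace is torn down roughly four minutes later. A sketch of polling that restart count with client-go, using the same kubeconfig path the suite logs; namespace and pod name are placeholders:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ns, podName := "container-probe-example", "liveness-example" // placeholders

	for i := 0; i < 10; i++ {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		var restarts int32
		if len(pod.Status.ContainerStatuses) > 0 {
			restarts = pod.Status.ContainerStatuses[0].RestartCount
		}
		fmt.Printf("restart count: %d\n", restarts) // expected to stay 0 in this scenario
		time.Sleep(10 * time.Second)
	}
}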
• [SLOW TEST:248.487 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":3,"skipped":289,"failed":0} May 28 20:53:59.913: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:47.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 STEP: wait until node is ready May 28 20:49:47.573: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration May 28 20:49:48.585: INFO: node status heartbeat is unchanged for 1.003660633s, waiting for 1m20s May 28 20:49:49.585: INFO: node status heartbeat is unchanged for 2.003624599s, waiting for 1m20s May 28 20:49:50.584: INFO: node status heartbeat is unchanged for 3.002824517s, waiting for 1m20s May 28 20:49:51.585: INFO: node status heartbeat is unchanged for 4.003562471s, waiting for 1m20s May 28 20:49:52.584: INFO: node status heartbeat is unchanged for 5.002966236s, waiting for 1m20s May 28 20:49:53.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:49:53.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: 
resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:49:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:49:53 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:49:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:49:53 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:49:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:49:53 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    NodeInfo: v1.NodeSystemInfo{MachineID: "b2730c4b09814ab9a78e7bc62c820fbb", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "f1459072-d21d-46de-a5d9-46ec9349aae0", KernelVersion: "3.10.0-1160.25.1.el7.x86_64", OSImage: "CentOS Linux 7 (Core)", ContainerRuntimeVersion: "docker://19.3.14", KubeletVersion: "v1.19.8", KubeProxyVersion: "v1.19.8", OperatingSystem: "linux", Architecture: "amd64"},    Images: []v1.ContainerImage{    ... // 13 identical elements    {Names: []string{"k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b", "k8s.gcr.io/kube-scheduler:v1.19.8"}, SizeBytes: 46510430},    {Names: []string{"localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620", "localhost:30500/sriov-device-plugin:v3.3.1"}, SizeBytes: 44392623}, +  { +  Names: []string{ +  "gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213", +  "gcr.io/kubernetes-e2e-test-images/nonroot:1.0", +  }, +  SizeBytes: 42321438, +  },    {Names: []string{"quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee", "quay.io/prometheus/node-exporter:v0.18.1"}, SizeBytes: 22933477},    {Names: []string{"localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388", "localhost:30500/tas-controller:0.1"}, SizeBytes: 22922439},    ... 
// 3 identical elements    {Names: []string{"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e", "gcr.io/google-samples/hello-go-gke:1.0"}, SizeBytes: 11443478},    {Names: []string{"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb", "appropriate/curl:edge"}, SizeBytes: 5654234}, +  { +  Names: []string{ +  "busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", +  "busybox:1.29", +  }, +  SizeBytes: 1154361, +  },    {Names: []string{"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", "busybox:1.28"}, SizeBytes: 1146369},    {Names: []string{"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa", "k8s.gcr.io/pause:3.3"}, SizeBytes: 682696},    },    VolumesInUse: nil,    VolumesAttached: nil,    Config: nil,   } May 28 20:49:54.585: INFO: node status heartbeat is unchanged for 1.000314043s, waiting for 1m20s May 28 20:49:55.585: INFO: node status heartbeat is unchanged for 2.000087238s, waiting for 1m20s May 28 20:49:56.584: INFO: node status heartbeat is unchanged for 2.999765207s, waiting for 1m20s May 28 20:49:57.584: INFO: node status heartbeat is unchanged for 3.999741325s, waiting for 1m20s May 28 20:49:58.585: INFO: node status heartbeat is unchanged for 5.000290008s, waiting for 1m20s May 28 20:49:59.584: INFO: node status heartbeat is unchanged for 5.999859541s, waiting for 1m20s May 28 20:50:00.584: INFO: node status heartbeat is unchanged for 6.999819901s, waiting for 1m20s May 28 20:50:01.584: INFO: node status heartbeat is unchanged for 7.999578718s, waiting for 1m20s May 28 20:50:02.585: INFO: node status heartbeat is unchanged for 9.00093728s, waiting for 1m20s May 28 20:50:03.584: INFO: node status heartbeat is unchanged for 9.999983616s, waiting for 1m20s May 28 20:50:04.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:50:04.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: 
"NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:49:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:03 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:49:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:03 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:49:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:03 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:50:05.585: INFO: node status heartbeat is unchanged for 1.000207746s, waiting for 1m20s May 28 20:50:06.585: INFO: node status heartbeat is unchanged for 2.000138428s, waiting for 1m20s May 28 20:50:07.585: INFO: node status heartbeat is unchanged for 3.000239187s, waiting for 1m20s May 28 20:50:08.584: INFO: node status heartbeat is unchanged for 3.999926926s, waiting for 1m20s May 28 20:50:09.585: INFO: node status heartbeat is unchanged for 5.000401456s, waiting for 1m20s May 28 20:50:10.584: INFO: node status heartbeat is unchanged for 5.999381839s, waiting for 1m20s May 28 20:50:11.584: INFO: node status heartbeat is unchanged for 6.999832977s, waiting for 1m20s May 28 20:50:12.586: INFO: node status heartbeat is unchanged for 8.001683712s, waiting for 1m20s May 28 20:50:13.585: INFO: node status heartbeat is unchanged for 9.000186281s, waiting for 1m20s May 28 20:50:14.588: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:50:14.590: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: 
v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:13 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:13 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:13 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } May 28 20:50:15.587: INFO: node status heartbeat is unchanged for 999.236444ms, waiting for 1m20s May 28 20:50:16.586: INFO: node status heartbeat is unchanged for 1.998322023s, waiting for 1m20s May 28 20:50:17.585: INFO: node status heartbeat is unchanged for 2.996761437s, waiting for 1m20s May 28 20:50:18.585: INFO: node status heartbeat is unchanged for 3.997215327s, waiting for 1m20s May 28 20:50:19.585: INFO: node status heartbeat is unchanged for 4.997035749s, waiting for 1m20s May 28 20:50:20.585: INFO: node status heartbeat is unchanged for 5.997032119s, waiting for 1m20s May 28 20:50:21.585: INFO: node status heartbeat is unchanged for 6.997004402s, waiting for 1m20s May 28 20:50:22.585: INFO: node status heartbeat is unchanged for 7.997540227s, waiting for 1m20s May 28 20:50:23.585: INFO: node status heartbeat is unchanged for 8.997354482s, waiting for 1m20s May 28 20:50:24.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:50:24.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:23 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:23 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:23 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    NodeInfo: v1.NodeSystemInfo{MachineID: "b2730c4b09814ab9a78e7bc62c820fbb", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "f1459072-d21d-46de-a5d9-46ec9349aae0", KernelVersion: "3.10.0-1160.25.1.el7.x86_64", OSImage: "CentOS Linux 7 (Core)", ContainerRuntimeVersion: "docker://19.3.14", KubeletVersion: "v1.19.8", KubeProxyVersion: "v1.19.8", OperatingSystem: "linux", Architecture: "amd64"},    Images: []v1.ContainerImage{    ... // 20 identical elements    {Names: []string{"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", "nginx:1.14-alpine"}, SizeBytes: 16032814},    {Names: []string{"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e", "gcr.io/google-samples/hello-go-gke:1.0"}, SizeBytes: 11443478}, +  { +  Names: []string{ +  "gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411", +  "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0", +  }, +  SizeBytes: 6757579, +  },    {Names: []string{"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb", "appropriate/curl:edge"}, SizeBytes: 5654234}, +  { +  Names: []string{ +  "gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0", +  "gcr.io/authenticated-image-pulling/alpine:3.7", +  }, +  SizeBytes: 4206620, +  },    {Names: []string{"busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", "busybox:1.29"}, SizeBytes: 1154361},    {Names: []string{"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", "busybox:1.28"}, SizeBytes: 1146369},    {Names: []string{"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa", "k8s.gcr.io/pause:3.3"}, SizeBytes: 682696},    },    VolumesInUse: nil,    VolumesAttached: nil,    Config: nil,   } May 28 20:50:25.587: INFO: node status heartbeat is unchanged for 1.001200198s, waiting for 1m20s May 28 20:50:26.585: INFO: node status heartbeat is unchanged for 1.999912402s, waiting for 1m20s May 28 20:50:27.584: INFO: node status heartbeat is unchanged for 2.998374252s, waiting for 1m20s May 28 20:50:28.587: INFO: node status heartbeat is unchanged for 4.001305725s, waiting for 1m20s May 28 20:50:29.584: INFO: node status heartbeat is unchanged for 4.998703698s, waiting for 1m20s May 28 20:50:30.586: INFO: node status heartbeat is unchanged for 6.000144496s, waiting for 1m20s May 28 20:50:31.584: INFO: node status heartbeat is unchanged for 6.998701508s, waiting for 1m20s May 28 20:50:32.584: INFO: node status heartbeat is unchanged for 7.999113143s, waiting for 1m20s May 28 20:50:33.586: INFO: node status heartbeat is unchanged for 9.000233201s, 
waiting for 1m20s May 28 20:50:34.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:50:34.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:33 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:33 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:33 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } May 28 20:50:35.585: INFO: node status heartbeat is unchanged for 1.000515478s, waiting for 1m20s May 28 20:50:36.586: INFO: node status heartbeat is unchanged for 2.001156935s, waiting for 1m20s May 28 20:50:37.584: INFO: node status heartbeat is unchanged for 2.999414662s, waiting for 1m20s May 28 20:50:38.586: INFO: node status heartbeat is unchanged for 4.001189239s, waiting for 1m20s May 28 20:50:39.585: INFO: node status heartbeat is unchanged for 5.000242104s, waiting for 1m20s May 28 20:50:40.585: INFO: node status heartbeat is unchanged for 6.000217925s, waiting for 1m20s May 28 20:50:41.584: INFO: node status heartbeat is unchanged for 6.999991001s, waiting for 1m20s May 28 20:50:42.586: INFO: node status heartbeat is unchanged for 8.001285686s, waiting for 1m20s May 28 20:50:43.585: INFO: node status heartbeat is unchanged for 9.000410895s, waiting for 1m20s May 28 20:50:44.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:50:44.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:43 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:43 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:43 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:50:45.585: INFO: node status heartbeat is unchanged for 999.803893ms, waiting for 1m20s May 28 20:50:46.585: INFO: node status heartbeat is unchanged for 2.000150753s, waiting for 1m20s May 28 20:50:47.584: INFO: node status heartbeat is unchanged for 2.999291529s, waiting for 1m20s May 28 20:50:48.585: INFO: node status heartbeat is unchanged for 3.999661782s, waiting for 1m20s May 28 20:50:49.584: INFO: node status heartbeat is unchanged for 4.998854393s, waiting for 1m20s May 28 20:50:50.585: INFO: node status heartbeat is unchanged for 6.000035403s, waiting for 1m20s May 28 20:50:51.585: INFO: node status heartbeat is unchanged for 6.999679268s, waiting for 1m20s May 28 20:50:52.584: INFO: node status heartbeat is unchanged for 7.99911485s, waiting for 1m20s May 28 20:50:53.585: INFO: node status heartbeat is unchanged for 8.999714677s, waiting for 1m20s May 28 20:50:54.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:50:54.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {   
 Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:53 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:53 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:53 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:50:55.584: INFO: node status heartbeat is unchanged for 999.130111ms, waiting for 1m20s May 28 20:50:56.584: INFO: node status heartbeat is unchanged for 1.999507978s, waiting for 1m20s May 28 20:50:57.586: INFO: node status heartbeat is unchanged for 3.000927846s, waiting for 1m20s May 28 20:50:58.585: INFO: node status heartbeat is unchanged for 4.000038166s, waiting for 1m20s May 28 20:50:59.584: INFO: node status heartbeat is unchanged for 4.999136305s, waiting for 1m20s May 28 20:51:00.585: INFO: node status heartbeat is unchanged for 5.999753526s, waiting for 1m20s May 28 20:51:01.584: INFO: node status heartbeat is unchanged for 6.999429459s, waiting for 1m20s May 28 20:51:02.587: INFO: node status heartbeat is unchanged for 8.00172779s, waiting for 1m20s May 28 20:51:03.585: INFO: node status heartbeat is unchanged for 9.000046795s, waiting for 1m20s May 28 20:51:04.586: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:51:04.589: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:03 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:03 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:50:53 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:03 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } May 28 20:51:05.586: INFO: node status heartbeat is unchanged for 999.872747ms, waiting for 1m20s May 28 20:51:06.585: INFO: node status heartbeat is unchanged for 1.999394055s, waiting for 1m20s May 28 20:51:07.585: INFO: node status heartbeat is unchanged for 2.998575276s, waiting for 1m20s May 28 20:51:08.585: INFO: node status heartbeat is unchanged for 3.998960452s, waiting for 1m20s May 28 20:51:09.585: INFO: node status heartbeat is unchanged for 4.99857249s, waiting for 1m20s May 28 20:51:10.585: INFO: node status heartbeat is unchanged for 5.999453734s, waiting for 1m20s May 28 20:51:11.585: INFO: node status heartbeat is unchanged for 6.998954893s, waiting for 1m20s May 28 20:51:12.585: INFO: node status heartbeat is unchanged for 7.998692893s, waiting for 1m20s May 28 20:51:13.584: INFO: node status heartbeat is unchanged for 8.998070132s, waiting for 1m20s May 28 20:51:14.587: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:51:14.589: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:13 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:13 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:03 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:13 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:51:15.585: INFO: node status heartbeat is unchanged for 997.932687ms, waiting for 1m20s May 28 20:51:16.585: INFO: node status heartbeat is unchanged for 1.997831629s, waiting for 1m20s May 28 20:51:17.585: INFO: node status heartbeat is unchanged for 2.998056334s, waiting for 1m20s May 28 20:51:18.587: INFO: node status heartbeat is unchanged for 3.999847857s, waiting for 1m20s May 28 20:51:19.586: INFO: node status heartbeat is unchanged for 4.99874948s, waiting for 1m20s May 28 20:51:20.585: INFO: node status heartbeat is unchanged for 5.998043414s, waiting for 1m20s May 28 20:51:21.584: INFO: node status heartbeat is unchanged for 6.99710905s, waiting for 1m20s May 28 20:51:22.584: INFO: node status heartbeat is unchanged for 7.997544616s, waiting for 1m20s May 28 20:51:23.584: INFO: node status heartbeat is unchanged for 8.997620992s, waiting for 1m20s May 28 20:51:24.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:51:24.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    
Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:23 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:23 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:23 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:51:25.585: INFO: node status heartbeat is unchanged for 1.000451122s, waiting for 1m20s May 28 20:51:26.585: INFO: node status heartbeat is unchanged for 2.000460912s, waiting for 1m20s May 28 20:51:27.584: INFO: node status heartbeat is unchanged for 2.999688082s, waiting for 1m20s May 28 20:51:28.584: INFO: node status heartbeat is unchanged for 3.999517334s, waiting for 1m20s May 28 20:51:29.585: INFO: node status heartbeat is unchanged for 5.000157783s, waiting for 1m20s May 28 20:51:30.585: INFO: node status heartbeat is unchanged for 6.00008162s, waiting for 1m20s May 28 20:51:31.586: INFO: node status heartbeat is unchanged for 7.001199236s, waiting for 1m20s May 28 20:51:32.585: INFO: node status heartbeat is unchanged for 8.000558462s, waiting for 1m20s May 28 20:51:33.584: INFO: node status heartbeat is unchanged for 8.999321085s, waiting for 1m20s May 28 20:51:34.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:51:34.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:33 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:33 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:33 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } May 28 20:51:35.585: INFO: node status heartbeat is unchanged for 999.701459ms, waiting for 1m20s May 28 20:51:36.584: INFO: node status heartbeat is unchanged for 1.999260038s, waiting for 1m20s May 28 20:51:37.585: INFO: node status heartbeat is unchanged for 2.999615734s, waiting for 1m20s May 28 20:51:38.585: INFO: node status heartbeat is unchanged for 3.999625089s, waiting for 1m20s May 28 20:51:39.585: INFO: node status heartbeat is unchanged for 4.999751458s, waiting for 1m20s May 28 20:51:40.584: INFO: node status heartbeat is unchanged for 5.998873445s, waiting for 1m20s May 28 20:51:41.585: INFO: node status heartbeat is unchanged for 7.000136802s, waiting for 1m20s May 28 20:51:42.585: INFO: node status heartbeat is unchanged for 7.999370342s, waiting for 1m20s May 28 20:51:43.584: INFO: node status heartbeat is unchanged for 8.999042336s, waiting for 1m20s May 28 20:51:44.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:51:44.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:43 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:43 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:33 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:43 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:51:45.585: INFO: node status heartbeat is unchanged for 999.924359ms, waiting for 1m20s May 28 20:51:46.584: INFO: node status heartbeat is unchanged for 1.999508638s, waiting for 1m20s May 28 20:51:47.585: INFO: node status heartbeat is unchanged for 3.00027659s, waiting for 1m20s May 28 20:51:48.585: INFO: node status heartbeat is unchanged for 4.000492401s, waiting for 1m20s May 28 20:51:49.585: INFO: node status heartbeat is unchanged for 5.000212586s, waiting for 1m20s May 28 20:51:50.585: INFO: node status heartbeat is unchanged for 6.000183561s, waiting for 1m20s May 28 20:51:51.584: INFO: node status heartbeat is unchanged for 6.999071542s, waiting for 1m20s May 28 20:51:52.585: INFO: node status heartbeat is unchanged for 7.999983085s, waiting for 1m20s May 28 20:51:53.586: INFO: node status heartbeat is unchanged for 9.000764712s, waiting for 1m20s May 28 20:51:54.587: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s May 28 20:51:54.590: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {   
 Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:54 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:54 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:43 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:54 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:51:55.587: INFO: node status heartbeat is unchanged for 1.000133488s, waiting for 1m20s May 28 20:51:56.588: INFO: node status heartbeat is unchanged for 2.001232657s, waiting for 1m20s May 28 20:51:57.586: INFO: node status heartbeat is unchanged for 2.998938016s, waiting for 1m20s May 28 20:51:58.585: INFO: node status heartbeat is unchanged for 3.998037797s, waiting for 1m20s May 28 20:51:59.584: INFO: node status heartbeat is unchanged for 4.99762452s, waiting for 1m20s May 28 20:52:00.586: INFO: node status heartbeat is unchanged for 5.999226127s, waiting for 1m20s May 28 20:52:01.586: INFO: node status heartbeat is unchanged for 6.998898628s, waiting for 1m20s May 28 20:52:02.585: INFO: node status heartbeat is unchanged for 7.998457885s, waiting for 1m20s May 28 20:52:03.585: INFO: node status heartbeat is unchanged for 8.998087813s, waiting for 1m20s May 28 20:52:04.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:52:04.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:04 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:04 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:51:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:04 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } May 28 20:52:05.585: INFO: node status heartbeat is unchanged for 999.938032ms, waiting for 1m20s May 28 20:52:06.585: INFO: node status heartbeat is unchanged for 1.999828073s, waiting for 1m20s May 28 20:52:07.585: INFO: node status heartbeat is unchanged for 3.000456298s, waiting for 1m20s May 28 20:52:08.584: INFO: node status heartbeat is unchanged for 3.999398056s, waiting for 1m20s May 28 20:52:09.585: INFO: node status heartbeat is unchanged for 5.000318573s, waiting for 1m20s May 28 20:52:10.587: INFO: node status heartbeat is unchanged for 6.002303719s, waiting for 1m20s May 28 20:52:11.586: INFO: node status heartbeat is unchanged for 7.001128821s, waiting for 1m20s May 28 20:52:12.587: INFO: node status heartbeat is unchanged for 8.001739057s, waiting for 1m20s May 28 20:52:13.585: INFO: node status heartbeat is unchanged for 9.000100611s, waiting for 1m20s May 28 20:52:14.587: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:52:14.589: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:14 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:14 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:14 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:52:15.585: INFO: node status heartbeat is unchanged for 998.047326ms, waiting for 1m20s May 28 20:52:16.588: INFO: node status heartbeat is unchanged for 2.001171962s, waiting for 1m20s May 28 20:52:17.584: INFO: node status heartbeat is unchanged for 2.997567434s, waiting for 1m20s May 28 20:52:18.586: INFO: node status heartbeat is unchanged for 3.998843466s, waiting for 1m20s May 28 20:52:19.585: INFO: node status heartbeat is unchanged for 4.998138776s, waiting for 1m20s May 28 20:52:20.585: INFO: node status heartbeat is unchanged for 5.998062942s, waiting for 1m20s May 28 20:52:21.586: INFO: node status heartbeat is unchanged for 6.998792281s, waiting for 1m20s May 28 20:52:22.584: INFO: node status heartbeat is unchanged for 7.997629073s, waiting for 1m20s May 28 20:52:23.585: INFO: node status heartbeat is unchanged for 8.997794583s, waiting for 1m20s May 28 20:52:24.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:52:24.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {  
  Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:24 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:24 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:24 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:52:25.585: INFO: node status heartbeat is unchanged for 999.305215ms, waiting for 1m20s May 28 20:52:26.584: INFO: node status heartbeat is unchanged for 1.999170743s, waiting for 1m20s May 28 20:52:27.585: INFO: node status heartbeat is unchanged for 2.999528452s, waiting for 1m20s May 28 20:52:28.584: INFO: node status heartbeat is unchanged for 3.999219061s, waiting for 1m20s May 28 20:52:29.585: INFO: node status heartbeat is unchanged for 4.999350996s, waiting for 1m20s May 28 20:52:30.585: INFO: node status heartbeat is unchanged for 5.999423677s, waiting for 1m20s May 28 20:52:31.585: INFO: node status heartbeat is unchanged for 6.999415989s, waiting for 1m20s May 28 20:52:32.585: INFO: node status heartbeat is unchanged for 7.999890755s, waiting for 1m20s May 28 20:52:33.585: INFO: node status heartbeat is unchanged for 8.999302568s, waiting for 1m20s May 28 20:52:34.584: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:52:34.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:24 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:34 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:24 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:34 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:24 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:34 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } May 28 20:52:35.586: INFO: node status heartbeat is unchanged for 1.001420704s, waiting for 1m20s May 28 20:52:36.585: INFO: node status heartbeat is unchanged for 2.000838555s, waiting for 1m20s May 28 20:52:37.584: INFO: node status heartbeat is unchanged for 2.99985944s, waiting for 1m20s May 28 20:52:38.584: INFO: node status heartbeat is unchanged for 4.000175606s, waiting for 1m20s May 28 20:52:39.584: INFO: node status heartbeat is unchanged for 5.000011662s, waiting for 1m20s May 28 20:52:40.585: INFO: node status heartbeat is unchanged for 6.000964786s, waiting for 1m20s May 28 20:52:41.584: INFO: node status heartbeat is unchanged for 7.000206873s, waiting for 1m20s May 28 20:52:42.585: INFO: node status heartbeat is unchanged for 8.000795957s, waiting for 1m20s May 28 20:52:43.584: INFO: node status heartbeat is unchanged for 8.999948504s, waiting for 1m20s May 28 20:52:44.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:52:44.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:34 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:44 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:34 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:44 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:34 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:44 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:52:45.584: INFO: node status heartbeat is unchanged for 999.109434ms, waiting for 1m20s May 28 20:52:46.585: INFO: node status heartbeat is unchanged for 1.999745765s, waiting for 1m20s May 28 20:52:47.584: INFO: node status heartbeat is unchanged for 2.99911311s, waiting for 1m20s May 28 20:52:48.584: INFO: node status heartbeat is unchanged for 3.998879457s, waiting for 1m20s May 28 20:52:49.584: INFO: node status heartbeat is unchanged for 4.998675191s, waiting for 1m20s May 28 20:52:50.584: INFO: node status heartbeat is unchanged for 5.998854276s, waiting for 1m20s May 28 20:52:51.586: INFO: node status heartbeat is unchanged for 7.000790926s, waiting for 1m20s May 28 20:52:52.584: INFO: node status heartbeat is unchanged for 7.9989081s, waiting for 1m20s May 28 20:52:53.585: INFO: node status heartbeat is unchanged for 8.999586924s, waiting for 1m20s May 28 20:52:54.584: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:52:54.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    
Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:44 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:54 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:44 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:54 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:44 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:54 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:52:55.585: INFO: node status heartbeat is unchanged for 1.000315634s, waiting for 1m20s May 28 20:52:56.586: INFO: node status heartbeat is unchanged for 2.001349765s, waiting for 1m20s May 28 20:52:57.585: INFO: node status heartbeat is unchanged for 3.000872661s, waiting for 1m20s May 28 20:52:58.586: INFO: node status heartbeat is unchanged for 4.001296901s, waiting for 1m20s May 28 20:52:59.585: INFO: node status heartbeat is unchanged for 5.000405812s, waiting for 1m20s May 28 20:53:00.585: INFO: node status heartbeat is unchanged for 6.000613226s, waiting for 1m20s May 28 20:53:01.585: INFO: node status heartbeat is unchanged for 7.000953679s, waiting for 1m20s May 28 20:53:02.585: INFO: node status heartbeat is unchanged for 8.000660051s, waiting for 1m20s May 28 20:53:03.584: INFO: node status heartbeat is unchanged for 8.999891998s, waiting for 1m20s May 28 20:53:04.584: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:53:04.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:04 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:04 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:52:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:04 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } May 28 20:53:05.586: INFO: node status heartbeat is unchanged for 1.0013082s, waiting for 1m20s May 28 20:53:06.585: INFO: node status heartbeat is unchanged for 2.00032922s, waiting for 1m20s May 28 20:53:07.586: INFO: node status heartbeat is unchanged for 3.001121813s, waiting for 1m20s May 28 20:53:08.586: INFO: node status heartbeat is unchanged for 4.001191372s, waiting for 1m20s May 28 20:53:09.584: INFO: node status heartbeat is unchanged for 4.999975442s, waiting for 1m20s May 28 20:53:10.586: INFO: node status heartbeat is unchanged for 6.001387659s, waiting for 1m20s May 28 20:53:11.585: INFO: node status heartbeat is unchanged for 7.000500201s, waiting for 1m20s May 28 20:53:12.585: INFO: node status heartbeat is unchanged for 8.000732684s, waiting for 1m20s May 28 20:53:13.585: INFO: node status heartbeat is unchanged for 9.000240319s, waiting for 1m20s May 28 20:53:14.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:53:14.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:14 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:14 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",  
  Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:14 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:53:15.585: INFO: node status heartbeat is unchanged for 1.000386649s, waiting for 1m20s May 28 20:53:16.585: INFO: node status heartbeat is unchanged for 2.000131719s, waiting for 1m20s May 28 20:53:17.585: INFO: node status heartbeat is unchanged for 3.000742203s, waiting for 1m20s May 28 20:53:18.587: INFO: node status heartbeat is unchanged for 4.00207507s, waiting for 1m20s May 28 20:53:19.585: INFO: node status heartbeat is unchanged for 5.000534361s, waiting for 1m20s May 28 20:53:20.585: INFO: node status heartbeat is unchanged for 6.000352336s, waiting for 1m20s May 28 20:53:21.586: INFO: node status heartbeat is unchanged for 7.001042937s, waiting for 1m20s May 28 20:53:22.587: INFO: node status heartbeat is unchanged for 8.001955418s, waiting for 1m20s May 28 20:53:23.585: INFO: node status heartbeat is unchanged for 9.000059344s, waiting for 1m20s May 28 20:53:24.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:53:24.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: 
"MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:24 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:24 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:24 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:53:25.586: INFO: node status heartbeat is unchanged for 1.001357951s, waiting for 1m20s May 28 20:53:26.586: INFO: node status heartbeat is unchanged for 2.000941743s, waiting for 1m20s May 28 20:53:27.585: INFO: node status heartbeat is unchanged for 3.000049892s, waiting for 1m20s May 28 20:53:28.585: INFO: node status heartbeat is unchanged for 4.000611028s, waiting for 1m20s May 28 20:53:29.584: INFO: node status heartbeat is unchanged for 4.999521028s, waiting for 1m20s May 28 20:53:30.586: INFO: node status heartbeat is unchanged for 6.00071043s, waiting for 1m20s May 28 20:53:31.585: INFO: node status heartbeat is unchanged for 7.000420527s, waiting for 1m20s May 28 20:53:32.586: INFO: node status heartbeat is unchanged for 8.001027044s, waiting for 1m20s May 28 20:53:33.585: INFO: node status heartbeat is unchanged for 8.999864438s, waiting for 1m20s May 28 20:53:34.586: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:53:34.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:24 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:34 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:24 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:34 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:24 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:34 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } May 28 20:53:35.588: INFO: node status heartbeat is unchanged for 1.002159501s, waiting for 1m20s May 28 20:53:36.587: INFO: node status heartbeat is unchanged for 2.001611154s, waiting for 1m20s May 28 20:53:37.585: INFO: node status heartbeat is unchanged for 2.99925859s, waiting for 1m20s May 28 20:53:38.585: INFO: node status heartbeat is unchanged for 3.999708229s, waiting for 1m20s May 28 20:53:39.585: INFO: node status heartbeat is unchanged for 4.999514451s, waiting for 1m20s May 28 20:53:40.585: INFO: node status heartbeat is unchanged for 5.999845038s, waiting for 1m20s May 28 20:53:41.585: INFO: node status heartbeat is unchanged for 6.999765127s, waiting for 1m20s May 28 20:53:42.585: INFO: node status heartbeat is unchanged for 7.999159087s, waiting for 1m20s May 28 20:53:43.585: INFO: node status heartbeat is unchanged for 8.999316829s, waiting for 1m20s May 28 20:53:44.584: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:53:44.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:34 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:44 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:34 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:44 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:34 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:44 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:53:45.585: INFO: node status heartbeat is unchanged for 1.000805596s, waiting for 1m20s May 28 20:53:46.584: INFO: node status heartbeat is unchanged for 1.999985888s, waiting for 1m20s May 28 20:53:47.587: INFO: node status heartbeat is unchanged for 3.002365564s, waiting for 1m20s May 28 20:53:48.585: INFO: node status heartbeat is unchanged for 4.000725484s, waiting for 1m20s May 28 20:53:49.585: INFO: node status heartbeat is unchanged for 5.000416601s, waiting for 1m20s May 28 20:53:50.588: INFO: node status heartbeat is unchanged for 6.003331546s, waiting for 1m20s May 28 20:53:51.585: INFO: node status heartbeat is unchanged for 7.000422566s, waiting for 1m20s May 28 20:53:52.585: INFO: node status heartbeat is unchanged for 8.000988777s, waiting for 1m20s May 28 20:53:53.585: INFO: node status heartbeat is unchanged for 9.000416915s, waiting for 1m20s May 28 20:53:54.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:53:54.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {  
  Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:44 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:54 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:44 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:54 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:44 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:54 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:53:55.586: INFO: node status heartbeat is unchanged for 1.001369018s, waiting for 1m20s May 28 20:53:56.585: INFO: node status heartbeat is unchanged for 2.000243684s, waiting for 1m20s May 28 20:53:57.586: INFO: node status heartbeat is unchanged for 3.001697398s, waiting for 1m20s May 28 20:53:58.585: INFO: node status heartbeat is unchanged for 4.001007393s, waiting for 1m20s May 28 20:53:59.585: INFO: node status heartbeat is unchanged for 5.000405688s, waiting for 1m20s May 28 20:54:00.586: INFO: node status heartbeat is unchanged for 6.001115091s, waiting for 1m20s May 28 20:54:01.585: INFO: node status heartbeat is unchanged for 7.000092156s, waiting for 1m20s May 28 20:54:02.585: INFO: node status heartbeat is unchanged for 8.000425375s, waiting for 1m20s May 28 20:54:03.584: INFO: node status heartbeat is unchanged for 8.999754521s, waiting for 1m20s May 28 20:54:04.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:54:04.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:04 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:04 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:53:54 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:04 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } May 28 20:54:05.585: INFO: node status heartbeat is unchanged for 999.91329ms, waiting for 1m20s May 28 20:54:06.587: INFO: node status heartbeat is unchanged for 2.00207589s, waiting for 1m20s May 28 20:54:07.587: INFO: node status heartbeat is unchanged for 3.001981297s, waiting for 1m20s May 28 20:54:08.586: INFO: node status heartbeat is unchanged for 4.000995229s, waiting for 1m20s May 28 20:54:09.584: INFO: node status heartbeat is unchanged for 4.999347309s, waiting for 1m20s May 28 20:54:10.585: INFO: node status heartbeat is unchanged for 5.999986645s, waiting for 1m20s May 28 20:54:11.585: INFO: node status heartbeat is unchanged for 6.999686912s, waiting for 1m20s May 28 20:54:12.586: INFO: node status heartbeat is unchanged for 8.000749697s, waiting for 1m20s May 28 20:54:13.585: INFO: node status heartbeat is unchanged for 9.000407652s, waiting for 1m20s May 28 20:54:14.584: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:54:14.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:14 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:14 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure", 
   Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:04 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:14 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:54:15.586: INFO: node status heartbeat is unchanged for 1.001637887s, waiting for 1m20s May 28 20:54:16.585: INFO: node status heartbeat is unchanged for 2.0010516s, waiting for 1m20s May 28 20:54:17.585: INFO: node status heartbeat is unchanged for 3.000888106s, waiting for 1m20s May 28 20:54:18.585: INFO: node status heartbeat is unchanged for 4.000703237s, waiting for 1m20s May 28 20:54:19.585: INFO: node status heartbeat is unchanged for 5.000712663s, waiting for 1m20s May 28 20:54:20.587: INFO: node status heartbeat is unchanged for 6.002531406s, waiting for 1m20s May 28 20:54:21.584: INFO: node status heartbeat is unchanged for 7.000274265s, waiting for 1m20s May 28 20:54:22.586: INFO: node status heartbeat is unchanged for 8.001336271s, waiting for 1m20s May 28 20:54:23.585: INFO: node status heartbeat is unchanged for 9.000555508s, waiting for 1m20s May 28 20:54:24.585: INFO: node status heartbeat is unchanged for 10.000373918s, waiting for 1m20s May 28 20:54:25.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:54:25.587: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 
+0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:24 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:24 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:14 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:24 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:54:26.587: INFO: node status heartbeat is unchanged for 1.001994935s, waiting for 1m20s May 28 20:54:27.585: INFO: node status heartbeat is unchanged for 2.000044905s, waiting for 1m20s May 28 20:54:28.585: INFO: node status heartbeat is unchanged for 3.000271434s, waiting for 1m20s May 28 20:54:29.585: INFO: node status heartbeat is unchanged for 4.000362583s, waiting for 1m20s May 28 20:54:30.585: INFO: node status heartbeat is unchanged for 5.000232189s, waiting for 1m20s May 28 20:54:31.586: INFO: node status heartbeat is unchanged for 6.001532087s, waiting for 1m20s May 28 20:54:32.585: INFO: node status heartbeat is unchanged for 7.00020862s, waiting for 1m20s May 28 20:54:33.584: INFO: node status heartbeat is unchanged for 7.999760716s, waiting for 1m20s May 28 20:54:34.586: INFO: node status heartbeat is unchanged for 9.000867413s, waiting for 1m20s May 28 20:54:35.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:54:35.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, 
s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:24 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:34 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:24 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:34 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:24 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:34 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } May 28 20:54:36.585: INFO: node status heartbeat is unchanged for 1.000245941s, waiting for 1m20s May 28 20:54:37.586: INFO: node status heartbeat is unchanged for 2.000641127s, waiting for 1m20s May 28 20:54:38.585: INFO: node status heartbeat is unchanged for 2.999708671s, waiting for 1m20s May 28 20:54:39.585: INFO: node status heartbeat is unchanged for 3.999715092s, waiting for 1m20s May 28 20:54:40.585: INFO: node status heartbeat is unchanged for 4.999931968s, waiting for 1m20s May 28 20:54:41.585: INFO: node status heartbeat is unchanged for 5.999681268s, waiting for 1m20s May 28 20:54:42.586: INFO: node status heartbeat is unchanged for 7.000716676s, waiting for 1m20s May 28 20:54:43.585: INFO: node status heartbeat is unchanged for 8.000286987s, waiting for 1m20s May 28 20:54:44.585: INFO: node status heartbeat is unchanged for 9.000334635s, waiting for 1m20s May 28 20:54:45.585: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 28 20:54:45.588: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-28 20:01:05 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:34 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:44 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:34 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:44 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:34 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-05-28 20:54:44 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-05-28 19:58:22 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-28 19:59:04 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } May 28 20:54:46.585: INFO: node status heartbeat is unchanged for 999.752977ms, waiting for 1m20s May 28 20:54:47.585: INFO: node status heartbeat is unchanged for 1.999826213s, waiting for 1m20s May 28 20:54:47.589: INFO: node status heartbeat is unchanged for 2.0034722s, waiting for 1m20s STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 20:54:47.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-4721" for this suite. • [SLOW TEST:300.049 seconds] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":2,"skipped":422,"failed":0} May 28 20:54:47.607: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 20:49:44.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:719 STEP: getting restart delay when capped May 28 21:01:30.287: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-05-28 20:56:24 +0000 UTC restartedAt=2021-05-28 21:01:30 +0000 UTC (5m6s) May 28 21:06:50.508: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-05-28 21:01:35 +0000 UTC restartedAt=2021-05-28 21:06:48 +0000 UTC (5m13s) May 28 21:12:02.701: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-05-28 21:06:53 +0000 UTC restartedAt=2021-05-28 21:12:01 +0000 UTC (5m8s) STEP: getting restart delay after a capped delay May 28 21:17:12.943: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-05-28 21:12:06 +0000 UTC restartedAt=2021-05-28 21:17:12 +0000 UTC 
(5m6s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 28 21:17:12.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8872" for this suite. • [SLOW TEST:1648.057 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:719 ------------------------------ {"msg":"PASSED [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":2,"skipped":79,"failed":0} May 28 21:17:12.955: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":3,"skipped":679,"failed":0} May 28 20:53:56.291: INFO: Running AfterSuite actions on all nodes May 28 21:17:12.989: INFO: Running AfterSuite actions on node 1 May 28 21:17:12.990: INFO: Skipping dumping logs from cluster Ran 30 of 5484 Specs in 1654.638 seconds SUCCESS! -- 30 Passed | 0 Failed | 0 Pending | 5454 Skipped Ginkgo ran 1 suite in 27m36.082650442s Test Suite Passed
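------------------------------
The NodeLease spec above polls the node's status once per second and reports whenever the kubelet heartbeats (the LastHeartbeatTime on the MemoryPressure, DiskPressure and PIDPressure conditions) advance, which in this run happens roughly every 10s while every other field of the node status stays identical. A minimal client-go sketch of that observation loop follows; the node name, kubeconfig path, polling window and log strings are illustrative assumptions, not values taken from the suite itself.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the suite logs its own path at startup.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	const nodeName = "node2" // the node observed in the log above (illustrative)
	var lastSeen map[string]metav1.Time
	lastChange := time.Now()

	for i := 0; i < 60; i++ { // observe for about a minute
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		// Record the heartbeat timestamp of every node condition.
		current := map[string]metav1.Time{}
		for _, c := range node.Status.Conditions {
			current[string(c.Type)] = c.LastHeartbeatTime
		}

		// Compare against the previous poll to see whether any heartbeat advanced.
		changed := false
		for t, hb := range current {
			if prev, ok := lastSeen[t]; ok && !prev.Equal(&hb) {
				changed = true
			}
		}

		if lastSeen == nil {
			fmt.Println("recorded initial heartbeat times")
			lastChange = time.Now()
		} else if changed {
			fmt.Printf("heartbeat advanced after %v\n", time.Since(lastChange))
			lastChange = time.Now()
		} else {
			fmt.Printf("heartbeat unchanged for %v\n", time.Since(lastChange))
		}
		lastSeen = current
		time.Sleep(1 * time.Second)
	}
}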
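------------------------------
The Pods back-off spec above measures gaps of roughly 5m6s to 5m13s between consecutive restarts once restartCount reaches 7, which is consistent with the kubelet's crash-loop back-off: a delay that doubles on every restart and is capped at MaxContainerBackOff (5 minutes). The sketch below computes that nominal schedule assuming the commonly cited 10s initial delay; the helper name and constants are illustrative, not read from this log.

package main

import (
	"fmt"
	"time"
)

const (
	initialBackOff      = 10 * time.Second // assumed default initial delay
	maxContainerBackOff = 5 * time.Minute  // cap the test exercises
)

// expectedBackOff returns the nominal delay before restart number restartCount (1-based),
// doubling from the initial delay and saturating at the cap.
func expectedBackOff(restartCount int) time.Duration {
	d := initialBackOff
	for i := 1; i < restartCount; i++ {
		d *= 2
		if d >= maxContainerBackOff {
			return maxContainerBackOff
		}
	}
	return d
}

func main() {
	for n := 1; n <= 10; n++ {
		fmt.Printf("restart %2d: ~%v\n", n, expectedBackOff(n))
	}
	// From roughly the 6th restart onward the delay is pinned at 5m, which is why the
	// log reports gaps of about 5m6s-5m13s (the cap plus container start latency).
}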