Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1629284100 - Will randomize all specs
Will run 5484 specs

Running in parallel across 10 nodes

Aug 18 10:55:02.112: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 10:55:02.115: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 18 10:55:02.139: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 18 10:55:02.201: INFO: The status of Pod cmk-init-discover-node1-bxznx is Succeeded, skipping waiting
Aug 18 10:55:02.201: INFO: The status of Pod cmk-init-discover-node2-6b2kz is Succeeded, skipping waiting
Aug 18 10:55:02.201: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 18 10:55:02.201: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Aug 18 10:55:02.201: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 18 10:55:02.223: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Aug 18 10:55:02.223: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Aug 18 10:55:02.223: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Aug 18 10:55:02.223: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Aug 18 10:55:02.223: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Aug 18 10:55:02.223: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Aug 18 10:55:02.223: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Aug 18 10:55:02.223: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 18 10:55:02.223: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Aug 18 10:55:02.223: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Aug 18 10:55:02.223: INFO: e2e test version: v1.19.12
Aug 18 10:55:02.224: INFO: kube-apiserver version: v1.19.8
Aug 18 10:55:02.224: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 10:55:02.230: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Aug 18 10:55:02.225: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 10:55:02.246: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Aug 18 10:55:02.233: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 10:55:02.252: INFO: Cluster IP family: ipv4
S
------------------------------
Aug 18 10:55:02.229: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 10:55:02.253: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Aug 18 10:55:02.236: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 10:55:02.259: INFO: Cluster IP family: ipv4
S
------------------------------
Aug 18 10:55:02.236: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 10:55:02.260: INFO: Cluster IP family: ipv4
SSS
------------------------------
Aug 18 10:55:02.238: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 10:55:02.262: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Aug 18 10:55:02.252: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 10:55:02.271: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Aug 18 10:55:02.255: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 10:55:02.276: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Aug 18 10:55:02.274: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 10:55:02.295: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 18 10:55:02.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename autoscaling
Aug 18 10:55:02.353: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Aug 18 10:55:02.354: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71
Aug 18 10:55:02.357: INFO: Only supported for providers [gce gke kubemark] (not skeleton)
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 18 10:55:02.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "autoscaling-6121" for this suite.
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115
STEP: Restoring initial size of the cluster
E0818 10:55:02.370141 22 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 170 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f1360, 0x7548830)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89
panic(0x42f1360, 0x7548830)
  /usr/local/go/src/runtime/panic.go:969 +0x1b9
k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc000af0750, 0xcb4c00, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0045f20a0, 0xc000af0750, 0xc0045f20a0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc000af0750, 0x6717efd94aa90c, 0xc000af0778)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x77169a0, 0xaa, 0x4f92d7)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5
k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc00419a180, 0x25, 0x23, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9
k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46
k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0016ccd20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0016ccd20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000ab7ff0, 0x52ea280, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc000af16c0, 0xc00338bb30, 0x52ea280, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00338bb30, 0x0, 0x52ea280, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00338bb30, 0x52ea280, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0041fc000, 0xc00338bb30, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0041fc000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0041fc000, 0xc0041f2018) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7fb2ba6fd560, 0xc00292ef00, 0x4c2974b, 0x14, 0xc002dc39e0, 0x3, 0x3, 0x539f360, 0xc0001e08c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52eeee0, 0xc00292ef00, 0x4c2974b, 0x14, 0xc001b99380, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52eeee0, 0xc00292ef00, 0x4c2974b, 0x14, 0xc0029c4480, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00292ef00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc00292ef00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc00292ef00, 0x4deb2c0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.044 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:335 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:02.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test Aug 18 10:55:02.513: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 18 10:55:02.515: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:88 [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:02.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-2234" for this suite. 
•SSS ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":86,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:02.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename localssd Aug 18 10:55:02.577: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 18 10:55:02.578: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:36 Aug 18 10:55:02.581: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:02.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "localssd-9654" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should write and read from node local SSD [Feature:GKELocalSSD] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:40 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:37 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:03.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that 
node lease is updated at least once within the lease duration [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:03.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-5505" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":1,"skipped":359,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:03.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Aug 18 10:55:03.437: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:03.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-4318" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0818 10:55:03.447798 29 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 133 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f1360, 0x7548830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42f1360, 0x7548830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001920d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001d14750, 0xcb4c00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0025992a0, 0xc001d14750, 0xc0025992a0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc001d14750, 0x6717f019866b5b, 0xc001d14778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x77169a0, 0xd9, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0024e30e0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00052e8a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00052e8a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00110c828, 0x52ea280, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc001d156c0, 0xc003aed950, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003aed950, 0x0, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003aed950, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003630dc0, 0xc003aed950, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003630dc0, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003630dc0, 0xc003f7e778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7f05c6b0f058, 0xc001703380, 0x4c2974b, 0x14, 0xc000feaf00, 0x3, 0x3, 0x539f360, 0xc000190900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52eeee0, 0xc001703380, 0x4c2974b, 0x14, 0xc002c1d200, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52eeee0, 0xc001703380, 0x4c2974b, 0x14, 0xc00234b840, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001703380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001703380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001703380, 0x4deb2c0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:238 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:03.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl Aug 18 10:55:03.058: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 18 10:55:03.060: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:183 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 
10:55:05.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-540" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":1,"skipped":356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:05.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Aug 18 10:55:05.257: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:05.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-7906" for this suite. [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0818 10:55:05.267590 27 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 228 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f1360, 0x7548830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42f1360, 0x7548830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001920d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc004b3c750, 0xcb4c00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002f67c00, 0xc004b3c750, 0xc002f67c00, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc004b3c750, 0x6717f085fe4c1c, 0xc004b3c778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d 
k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x77169a0, 0xa8, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc00437fce0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00184bc20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00184bc20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000bdbff0, 0x52ea280, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc004b3d6c0, 0xc0026a5770, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0026a5770, 0x0, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0026a5770, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00287a000, 0xc0026a5770, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00287a000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00287a000, 0xc002870030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7f1faedf5718, 0xc001a33b00, 0x4c2974b, 0x14, 0xc00359a030, 0x3, 0x3, 0x539f360, 0xc000190900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52eeee0, 0xc001a33b00, 0x4c2974b, 0x14, 0xc000eb9dc0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52eeee0, 0xc001a33b00, 0x4c2974b, 0x14, 0xc0039c6d80, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a33b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001a33b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001a33b00, 0x4deb2c0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale up twice [Feature:ClusterAutoscalerScalability2] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:161 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:05.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a docker exec liveness probe with timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215 Aug 18 10:55:05.428: INFO: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:05.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3011" for this suite. 
S [SKIPPING] [0.028 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a docker exec liveness probe with timeout [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215 The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:217 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:02.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Aug 18 10:55:02.375: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 18 10:55:02.377: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 Aug 18 10:55:02.392: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-d30a107a-0c0e-49dc-864c-96e5008c3679" in namespace "security-context-test-3595" to be "Succeeded or Failed" Aug 18 10:55:02.394: INFO: Pod "busybox-readonly-true-d30a107a-0c0e-49dc-864c-96e5008c3679": Phase="Pending", Reason="", readiness=false. Elapsed: 1.947883ms Aug 18 10:55:04.397: INFO: Pod "busybox-readonly-true-d30a107a-0c0e-49dc-864c-96e5008c3679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004993973s Aug 18 10:55:06.401: INFO: Pod "busybox-readonly-true-d30a107a-0c0e-49dc-864c-96e5008c3679": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009041156s Aug 18 10:55:08.405: INFO: Pod "busybox-readonly-true-d30a107a-0c0e-49dc-864c-96e5008c3679": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012606222s Aug 18 10:55:10.409: INFO: Pod "busybox-readonly-true-d30a107a-0c0e-49dc-864c-96e5008c3679": Phase="Failed", Reason="", readiness=false. Elapsed: 8.016867582s Aug 18 10:55:10.409: INFO: Pod "busybox-readonly-true-d30a107a-0c0e-49dc-864c-96e5008c3679" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:10.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3595" for this suite. 
• [SLOW TEST:8.075 seconds]
[k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":20,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 18 10:55:02.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146
Aug 18 10:55:02.486: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-9354" to be "Succeeded or Failed"
Aug 18 10:55:02.488: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308626ms
Aug 18 10:55:04.491: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004952819s
Aug 18 10:55:06.494: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007591205s
Aug 18 10:55:08.497: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010683056s
Aug 18 10:55:10.500: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.013652135s
Aug 18 10:55:10.500: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 18 10:55:10.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9354" for this suite.
• [SLOW TEST:8.063 seconds]
[k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146
------------------------------
S
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":1,"skipped":55,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 18 10:55:02.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not run without a specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 18 10:55:10.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3536" for this suite.
• [SLOW TEST:8.042 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":2,"skipped":92,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:10.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Aug 18 10:55:10.751: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:10.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-3390" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0818 10:55:10.762034 22 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 170 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f1360, 0x7548830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42f1360, 0x7548830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc000af0750, 0xcb4c00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000bae2c0, 0xc000af0750, 0xc000bae2c0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc000af0750, 0x6717f1cd7d8e7e, 0xc000af0778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x77169a0, 0xa0, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc00158df50, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0016ccd20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0016ccd20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000ab7ff0, 0x52ea280, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc000af16c0, 0xc00338b680, 0x52ea280, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00338b680, 0x0, 0x52ea280, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00338b680, 0x52ea280, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0041fc000, 0xc00338b680, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0041fc000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0041fc000, 0xc0041f2018) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7fb2ba6fd560, 0xc00292ef00, 0x4c2974b, 0x14, 0xc002dc39e0, 0x3, 0x3, 0x539f360, 0xc0001e08c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52eeee0, 0xc00292ef00, 0x4c2974b, 0x14, 0xc001b99380, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52eeee0, 0xc00292ef00, 0x4c2974b, 0x14, 0xc0029c4480, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00292ef00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc00292ef00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc00292ef00, 0x4deb2c0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale up at all [Feature:ClusterAutoscalerScalability1] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:138 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:02.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples Aug 18 10:55:02.371: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 18 10:55:02.373: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 Aug 18 10:55:02.381: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that reads a secret
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114
STEP: creating secret and pod
Aug 18 10:55:02.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3896 create -f -'
Aug 18 10:55:02.882: INFO: stderr: ""
Aug 18 10:55:02.882: INFO: stdout: "secret/test-secret created\n"
Aug 18 10:55:02.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3896 create -f -'
Aug 18 10:55:03.171: INFO: stderr: ""
Aug 18 10:55:03.171: INFO: stdout: "pod/secret-test-pod created\n"
STEP: checking if secret was read correctly
Aug 18 10:55:11.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3896 logs secret-test-pod test-container'
Aug 18 10:55:11.336: INFO: stderr: ""
Aug 18 10:55:11.336: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n"
[AfterEach] [k8s.io] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 18 10:55:11.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-3896" for this suite.

• [SLOW TEST:9.003 seconds]
[k8s.io] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  [k8s.io] Secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should create a pod that reads a secret
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114
------------------------------
{"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret","total":-1,"completed":1,"skipped":27,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 18 10:55:03.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124
Aug 18 10:55:03.805: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-6095" to be "Succeeded or Failed"
Aug 18 10:55:03.813: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023801ms
Aug 18 10:55:05.816: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010610069s
Aug 18 10:55:07.818: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013028842s
Aug 18 10:55:09.821: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016302653s
Aug 18 10:55:11.824: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 8.019071788s Aug 18 10:55:11.824: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:11.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6095" for this suite. • [SLOW TEST:8.080 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":688,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:02.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl Aug 18 10:55:02.385: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 18 10:55:02.388: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:12.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-3443" for this suite. 
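The sysctl spec above (namespace sysctl-3443) sets kernel.shm_rmid_forced through the pod-level securityContext and then reads the value back out of the pod's logs. As a rough illustration only, a pod of that shape built with the core/v1 Go types might look like the following sketch (pod name, image, and command are placeholders, not the test's fixtures):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Pod that requests a namespaced sysctl and prints the resulting value,
        // so it can be verified from the container logs like the spec above does.
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
            ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
                },
                Containers: []corev1.Container{{
                    Name:    "sysctl-check",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }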
• [SLOW TEST:10.058 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":1,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:02.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Aug 18 10:55:02.972: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 18 10:55:02.974: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 Aug 18 10:55:02.988: INFO: Waiting up to 5m0s for pod "busybox-user-0-0cd273b7-2527-4766-a24b-bf78237a46d8" in namespace "security-context-test-1371" to be "Succeeded or Failed" Aug 18 10:55:02.991: INFO: Pod "busybox-user-0-0cd273b7-2527-4766-a24b-bf78237a46d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.5138ms Aug 18 10:55:04.995: INFO: Pod "busybox-user-0-0cd273b7-2527-4766-a24b-bf78237a46d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006347422s Aug 18 10:55:06.998: INFO: Pod "busybox-user-0-0cd273b7-2527-4766-a24b-bf78237a46d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009967081s Aug 18 10:55:09.001: INFO: Pod "busybox-user-0-0cd273b7-2527-4766-a24b-bf78237a46d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013139027s Aug 18 10:55:11.004: INFO: Pod "busybox-user-0-0cd273b7-2527-4766-a24b-bf78237a46d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015937307s Aug 18 10:55:13.008: INFO: Pod "busybox-user-0-0cd273b7-2527-4766-a24b-bf78237a46d8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020099987s Aug 18 10:55:15.011: INFO: Pod "busybox-user-0-0cd273b7-2527-4766-a24b-bf78237a46d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.022961886s Aug 18 10:55:15.011: INFO: Pod "busybox-user-0-0cd273b7-2527-4766-a24b-bf78237a46d8" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:15.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1371" for this suite. 
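Both Security Context specs so far (explicit-nonroot-uid and the busybox-user-0 pod above) pin the container UID through the container-level securityContext. A minimal sketch of that field, assuming the core/v1 Go types and an arbitrary image; the UID here is illustrative, and the uid-0 spec is the same shape with RunAsUser set to 0:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        // Pod whose single container runs with an explicitly requested UID.
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
            ObjectMeta: metav1.ObjectMeta{Name: "explicit-uid-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "busybox",
                    Command: []string{"id", "-u"}, // prints the effective UID
                    SecurityContext: &corev1.SecurityContext{
                        RunAsUser: int64Ptr(1234),
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }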
• [SLOW TEST:12.067 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:05.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:16.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7721" for this suite. 
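The Container Runtime spec above pulls from a private registry by first creating an image pull secret and then referencing it from the pod spec. A sketch of the two objects involved, with a placeholder registry, image, and credentials rather than the test's own fixtures:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Docker-config secret holding the registry credentials (placeholder content).
        secret := corev1.Secret{
            TypeMeta:   metav1.TypeMeta{Kind: "Secret", APIVersion: "v1"},
            ObjectMeta: metav1.ObjectMeta{Name: "registry-cred"},
            Type:       corev1.SecretTypeDockerConfigJson,
            Data: map[string][]byte{
                corev1.DockerConfigJsonKey: []byte(`{"auths":{"registry.example.com":{"auth":"<base64 user:pass>"}}}`),
            },
        }
        // Pod that references the secret so the kubelet can authenticate the pull.
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
            ObjectMeta: metav1.ObjectMeta{Name: "private-image-demo"},
            Spec: corev1.PodSpec{
                ImagePullSecrets: []corev1.LocalObjectReference{{Name: "registry-cred"}},
                Containers: []corev1.Container{{
                    Name:  "main",
                    Image: "registry.example.com/team/app:1.0",
                }},
            },
        }
        for _, obj := range []interface{}{secret, pod} {
            out, _ := json.MarshalIndent(obj, "", "  ")
            fmt.Println(string(out))
        }
    }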
• [SLOW TEST:11.104 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":2,"skipped":609,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:10.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:16.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5805" for this suite. 
• [SLOW TEST:6.067 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":3,"skipped":178,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:15.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:19.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3893" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":2,"skipped":541,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:10.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 Aug 18 10:55:10.562: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-50b434d9-4ca0-4960-a24b-437858517092" in namespace "security-context-test-5138" to be "Succeeded or Failed" Aug 18 10:55:10.563: INFO: Pod "alpine-nnp-nil-50b434d9-4ca0-4960-a24b-437858517092": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.755707ms Aug 18 10:55:12.567: INFO: Pod "alpine-nnp-nil-50b434d9-4ca0-4960-a24b-437858517092": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00487205s Aug 18 10:55:14.573: INFO: Pod "alpine-nnp-nil-50b434d9-4ca0-4960-a24b-437858517092": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011009543s Aug 18 10:55:16.575: INFO: Pod "alpine-nnp-nil-50b434d9-4ca0-4960-a24b-437858517092": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013682658s Aug 18 10:55:18.581: INFO: Pod "alpine-nnp-nil-50b434d9-4ca0-4960-a24b-437858517092": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019076248s Aug 18 10:55:20.584: INFO: Pod "alpine-nnp-nil-50b434d9-4ca0-4960-a24b-437858517092": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02216921s Aug 18 10:55:20.584: INFO: Pod "alpine-nnp-nil-50b434d9-4ca0-4960-a24b-437858517092" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:20.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5138" for this suite. • [SLOW TEST:10.070 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":74,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:20.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:148 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:20.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-7093" for this suite. 
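The alpine-nnp-nil spec a little earlier leaves allowPrivilegeEscalation unset for a non-root UID and, per its name, expects escalation to remain possible; workloads that want the opposite set the field explicitly. A sketch of the explicit form, assuming the core/v1 Go types (image and command are placeholders):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }
    func boolPtr(b bool) *bool    { return &b }

    func main() {
        // Container that runs as a non-root UID and explicitly disallows
        // privilege escalation, instead of relying on the nil default checked
        // by the spec above.
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
            ObjectMeta: metav1.ObjectMeta{Name: "nnp-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "alpine",
                    Command: []string{"sh", "-c", "grep NoNewPrivs /proc/self/status"},
                    SecurityContext: &corev1.SecurityContext{
                        RunAsUser:                int64Ptr(1000),
                        AllowPrivilegeEscalation: boolPtr(false),
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }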
• ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":3,"skipped":198,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:19.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:22.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9353" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":3,"skipped":633,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:22.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Aug 18 10:55:22.958: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:22.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-6275" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0818 10:55:22.966621 34 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 226 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f1360, 0x7548830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42f1360, 0x7548830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001920d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0013b2750, 0xcb4c00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0007baf00, 0xc0013b2750, 0xc0007baf00, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0013b2750, 0x6717f4a4f017fc, 0xc0013b2778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x77169a0, 0xa2, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc003e4ac30, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0018db2c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0018db2c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00052bbf0, 0x52ea280, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0013b36c0, 0xc001ec9a40, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001ec9a40, 0x0, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001ec9a40, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00181a780, 0xc001ec9a40, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00181a780, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00181a780, 0xc002989808) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7f13f0b98d48, 0xc000e31800, 0x4c2974b, 0x14, 0xc00442be60, 0x3, 0x3, 0x539f360, 0xc000190900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52eeee0, 0xc000e31800, 0x4c2974b, 0x14, 0xc00323be80, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52eeee0, 0xc000e31800, 0x4c2974b, 0x14, 0xc00441df20, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000e31800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc000e31800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc000e31800, 0x4deb2c0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:297 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:16.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 Aug 18 10:55:17.011: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-6360456a-3c72-40ff-934b-563326e61c68" in namespace "security-context-test-9413" to be "Succeeded or Failed" Aug 18 10:55:17.014: INFO: Pod "alpine-nnp-true-6360456a-3c72-40ff-934b-563326e61c68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.971322ms Aug 18 10:55:19.017: INFO: Pod "alpine-nnp-true-6360456a-3c72-40ff-934b-563326e61c68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005682948s Aug 18 10:55:21.019: INFO: Pod "alpine-nnp-true-6360456a-3c72-40ff-934b-563326e61c68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008438587s Aug 18 10:55:23.022: INFO: Pod "alpine-nnp-true-6360456a-3c72-40ff-934b-563326e61c68": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.011339232s Aug 18 10:55:23.022: INFO: Pod "alpine-nnp-true-6360456a-3c72-40ff-934b-563326e61c68" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:23.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9413" for this suite. • [SLOW TEST:6.059 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":747,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:02.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod Aug 18 10:55:02.773: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 18 10:55:02.775: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 STEP: Creating a pod with a privileged container STEP: Executing in the privileged container Aug 18 10:55:22.799: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-7475 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 18 10:55:22.799: INFO: >>> kubeConfig: /root/.kube/config Aug 18 10:55:22.925: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-7475 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 18 10:55:22.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Aug 18 10:55:23.098: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-7475 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 18 10:55:23.098: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 
10:55:23.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-7475" for this suite. • [SLOW TEST:20.455 seconds] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 ------------------------------ SSS ------------------------------ {"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":1,"skipped":193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:23.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Aug 18 10:55:23.439: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:23.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-6281" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0818 10:55:23.450270 27 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 228 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f1360, 0x7548830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42f1360, 0x7548830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001920d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00343a750, 0xcb4c00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00234cd80, 0xc00343a750, 0xc00234cd80, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00343a750, 0x6717f4c1c4f282, 0xc00343a778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x77169a0, 0x97, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002f22840, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00184bc20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00184bc20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000bdbff0, 0x52ea280, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00343b6c0, 0xc0026a5860, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0026a5860, 0x0, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0026a5860, 0x52ea280, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00287a000, 0xc0026a5860, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00287a000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00287a000, 0xc002870030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7f1faedf5718, 0xc001a33b00, 0x4c2974b, 0x14, 0xc00359a030, 0x3, 0x3, 0x539f360, 0xc000190900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52eeee0, 0xc001a33b00, 0x4c2974b, 0x14, 0xc000eb9dc0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52eeee0, 0xc001a33b00, 0x4c2974b, 0x14, 0xc0039c6d80, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a33b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001a33b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001a33b00, 0x4deb2c0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale down empty nodes [Feature:ClusterAutoscalerScalability3] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:210 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:12.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:24.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7853" for this suite. 
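The runAsNonRoot spec that just finished ("should not run with an explicit root user ID") builds a pod whose container both declares runAsNonRoot and asks for UID 0, and passes when that container is refused rather than run. A sketch of that deliberately conflicting configuration, assuming the core/v1 Go types (names and image are placeholders):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }
    func boolPtr(b bool) *bool    { return &b }

    func main() {
        // RunAsNonRoot and an explicit UID of 0 contradict each other, so the
        // kubelet is expected to refuse to start this container; the spec above
        // passes on that refusal rather than on the pod succeeding.
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
            ObjectMeta: metav1.ObjectMeta{Name: "nonroot-conflict-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "busybox",
                    Command: []string{"id", "-u"},
                    SecurityContext: &corev1.SecurityContext{
                        RunAsNonRoot: boolPtr(true),
                        RunAsUser:    int64Ptr(0),
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }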
• [SLOW TEST:12.046 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":3,"skipped":987,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:24.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-pools STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:34 Aug 18 10:55:24.610: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:24.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-pools-6491" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a cluster with multiple node pools [Feature:GKENodePool] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:38 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:17.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the 
termination message should be set Aug 18 10:55:26.095: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:26.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1006" for this suite. • [SLOW TEST:9.079 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":4,"skipped":290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:26.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:30.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-3985" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":5,"skipped":336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Aug 18 10:55:30.481: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:23.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 Aug 18 10:55:23.944: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 STEP: creating the pod Aug 18 10:55:24.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5944 create -f -' Aug 18 10:55:24.349: INFO: stderr: "" Aug 18 10:55:24.349: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Aug 18 10:55:30.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5944 logs dapi-test-pod test-container' Aug 18 10:55:30.510: INFO: stderr: "" Aug 18 10:55:30.510: INFO: stdout: "KUBERNETES_PORT=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-5944\nMY_POD_IP=10.244.3.190\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Aug 18 10:55:30.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5944 logs dapi-test-pod test-container' Aug 18 10:55:30.681: INFO: stderr: "" Aug 18 10:55:30.681: INFO: stdout: "KUBERNETES_PORT=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-5944\nMY_POD_IP=10.244.3.190\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:30.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-5944" for this suite. 
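The Downward API example spec above (dapi-test-pod) injects the pod's own name, namespace, and IPs as environment variables and verifies them from the container logs, which is the env dump shown in the output. A sketch of that wiring, reusing the variable names from the log but with an illustrative image and command:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // fieldEnv builds an env var whose value comes from the downward API.
    func fieldEnv(name, fieldPath string) corev1.EnvVar {
        return corev1.EnvVar{
            Name: name,
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
            },
        }
    }

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
            ObjectMeta: metav1.ObjectMeta{Name: "dapi-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"}, // dump the env, as in the log above
                    Env: []corev1.EnvVar{
                        fieldEnv("MY_POD_NAME", "metadata.name"),
                        fieldEnv("MY_POD_NAMESPACE", "metadata.namespace"),
                        fieldEnv("MY_POD_IP", "status.podIP"),
                        fieldEnv("MY_HOST_IP", "status.hostIP"),
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }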
• [SLOW TEST:6.769 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace","total":-1,"completed":2,"skipped":554,"failed":0} Aug 18 10:55:30.692: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:25.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 Aug 18 10:55:25.511: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-35dda1e7-0768-4884-a6dd-5ff2f1db0824" in namespace "security-context-test-1401" to be "Succeeded or Failed" Aug 18 10:55:25.514: INFO: Pod "busybox-privileged-true-35dda1e7-0768-4884-a6dd-5ff2f1db0824": Phase="Pending", Reason="", readiness=false. Elapsed: 3.020725ms Aug 18 10:55:27.517: INFO: Pod "busybox-privileged-true-35dda1e7-0768-4884-a6dd-5ff2f1db0824": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006604724s Aug 18 10:55:29.521: INFO: Pod "busybox-privileged-true-35dda1e7-0768-4884-a6dd-5ff2f1db0824": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010203074s Aug 18 10:55:31.524: INFO: Pod "busybox-privileged-true-35dda1e7-0768-4884-a6dd-5ff2f1db0824": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013579845s Aug 18 10:55:33.527: INFO: Pod "busybox-privileged-true-35dda1e7-0768-4884-a6dd-5ff2f1db0824": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016770139s Aug 18 10:55:33.527: INFO: Pod "busybox-privileged-true-35dda1e7-0768-4884-a6dd-5ff2f1db0824" satisfied condition "Succeeded or Failed" Aug 18 10:55:33.535: INFO: Got logs for pod "busybox-privileged-true-35dda1e7-0768-4884-a6dd-5ff2f1db0824": "" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:33.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1401" for this suite. 
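The spec above (busybox-privileged-true-...) runs its container with privileged set to true, the same setting that allowed the earlier PrivilegedPod spec to add and delete a dummy network link from its privileged container, while the same command from the non-privileged container is expected to fail. A sketch of a privileged container, assuming the core/v1 Go types (image and command are placeholders):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    func main() {
        // A privileged container runs with broad host capabilities, which is
        // what network-manipulating commands like "ip link add" need.
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
            ObjectMeta: metav1.ObjectMeta{Name: "privileged-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:            "privileged-container",
                    Image:           "busybox",
                    Command:         []string{"ip", "link", "add", "dummy1", "type", "dummy"},
                    SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }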
• [SLOW TEST:8.068 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":4,"skipped":1535,"failed":0} Aug 18 10:55:33.544: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:23.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:774 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:41.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4059" for this suite. 
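The Pods spec above declares two custom readiness gates and flips pod readiness by patching the matching conditions into the pod status, as its STEP lines show. A sketch of a pod with such gates, plus the condition an external agent would patch in to satisfy one of them (the condition types are taken from the log; everything else is illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Pod that only becomes Ready once both custom gate conditions are True.
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
            ObjectMeta: metav1.ObjectMeta{Name: "readiness-gate-demo"},
            Spec: corev1.PodSpec{
                ReadinessGates: []corev1.PodReadinessGate{
                    {ConditionType: "k8s.io/test-condition1"},
                    {ConditionType: "k8s.io/test-condition2"},
                },
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "busybox",
                    Command: []string{"sleep", "3600"},
                }},
            },
        }
        // Condition an external controller (or a status patch) adds to
        // pod.status.conditions to mark the first gate as satisfied.
        cond := corev1.PodCondition{Type: "k8s.io/test-condition1", Status: corev1.ConditionTrue}

        for _, obj := range []interface{}{pod, cond} {
            out, _ := json.MarshalIndent(obj, "", "  ")
            fmt.Println(string(out))
        }
    }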
• [SLOW TEST:18.083 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:774 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":4,"skipped":1148,"failed":0} Aug 18 10:55:41.908: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:20.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 STEP: Creating pod liveness-3cfbdf64-2541-458b-9790-d25c04ddca13 in namespace container-probe-5989 Aug 18 10:55:27.025: INFO: Started pod liveness-3cfbdf64-2541-458b-9790-d25c04ddca13 in namespace container-probe-5989 STEP: checking the pod's current state and verifying that restartCount is present Aug 18 10:55:27.027: INFO: Initial restart count of pod liveness-3cfbdf64-2541-458b-9790-d25c04ddca13 is 0 Aug 18 10:55:49.066: INFO: Restart count of pod container-probe-5989/liveness-3cfbdf64-2541-458b-9790-d25c04ddca13 is now 1 (22.038323439s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:55:49.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5989" for this suite. 
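Editor's note: the probe test above exercises the kubelet following an HTTP redirect to a local URL: the liveness endpoint answers with a redirect to a path on the same host, the redirect target eventually starts failing, and the container is restarted (restart count 1 after about 22s in the log). A sketch of a pod with that shape of probe; the image, port and paths are illustrative rather than the framework's exact fixture:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-local-redirect-example", Namespace: "container-probe-5989"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // illustrative; any server exposing a redirecting endpoint works
				LivenessProbe: &corev1.Probe{
					// "Handler" is the v1.19 field name matching this run; newer k8s.io/api releases call it ProbeHandler
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							// redirects to a path on the same host ("local" redirect), so the kubelet
							// follows it and sees the redirect target's eventual failure
							Path: "/redirect?loc=%2Fhealthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}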
• [SLOW TEST:28.096 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":4,"skipped":249,"failed":0} Aug 18 10:55:49.085: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:11.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 [It] liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 Aug 18 10:55:11.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6163 create -f -' Aug 18 10:55:11.765: INFO: stderr: "" Aug 18 10:55:11.765: INFO: stdout: "pod/liveness-exec created\n" Aug 18 10:55:11.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6163 create -f -' Aug 18 10:55:12.046: INFO: stderr: "" Aug 18 10:55:12.046: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Aug 18 10:55:20.053: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:22.056: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:24.053: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:24.058: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:26.056: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:26.060: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:28.059: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:28.062: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:30.062: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:30.065: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:32.066: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:32.067: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:34.069: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:34.070: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:36.073: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:36.073: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:38.075: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:38.076: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:40.078: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:40.078: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:42.081: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:42.081: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:44.085: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:44.085: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:46.088: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:46.088: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:48.092: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:48.092: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:50.096: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:50.096: INFO: 
Pod: liveness-exec, restart count:0 Aug 18 10:55:52.099: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:52.099: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:54.103: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:54.103: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:56.106: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:55:56.106: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:58.110: INFO: Pod: liveness-http, restart count:0 Aug 18 10:55:58.110: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:00.114: INFO: Pod: liveness-http, restart count:0 Aug 18 10:56:00.114: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:02.117: INFO: Pod: liveness-http, restart count:0 Aug 18 10:56:02.117: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:04.122: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:04.122: INFO: Pod: liveness-http, restart count:1 Aug 18 10:56:04.122: INFO: Saw liveness-http restart, succeeded... Aug 18 10:56:06.126: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:08.130: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:10.134: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:12.137: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:14.141: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:16.144: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:18.147: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:20.150: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:22.155: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:24.158: INFO: Pod: liveness-exec, restart count:0 Aug 18 10:56:26.162: INFO: Pod: liveness-exec, restart count:1 Aug 18 10:56:26.162: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:56:26.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-6163" for this suite. 
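Editor's note: the [Feature:Example] test above is the documented liveness example: it kubectl-creates two pods, liveness-exec (an exec probe that starts failing once a file is removed) and liveness-http (an HTTP probe whose /healthz endpoint starts returning errors), and polls restart counts until each has restarted once. A sketch of the exec-probe variant, approximating the well-known docs manifest in Go (timings and image are the usual documented values, not read from this log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec", Namespace: "examples-6163"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// healthy for ~30s, then the probed file disappears and `cat` starts failing
				Args: []string{"/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"},
				LivenessProbe: &corev1.Probe{
					// v1.19 field name; newer k8s.io/api releases call this ProbeHandler
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}

liveness-http is the same idea with an httpGet probe against /healthz on port 8080. In this run liveness-http restarted first (10:56:04) and liveness-exec followed (10:56:26), which is exactly what the restart-count polling above records.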
• [SLOW TEST:74.777 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted","total":-1,"completed":2,"skipped":50,"failed":0} Aug 18 10:56:26.172: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:02.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods Aug 18 10:55:02.326: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 18 10:55:02.329: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:678 STEP: getting restart delay-0 Aug 18 10:56:12.372: INFO: getRestartDelay: restartCount = 3, finishedAt=2021-08-18 10:55:46 +0000 UTC restartedAt=2021-08-18 10:56:10 +0000 UTC (24s) STEP: getting restart delay-1 Aug 18 10:57:02.568: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-08-18 10:56:15 +0000 UTC restartedAt=2021-08-18 10:57:01 +0000 UTC (46s) STEP: getting restart delay-2 Aug 18 10:58:39.934: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-08-18 10:57:06 +0000 UTC restartedAt=2021-08-18 10:58:38 +0000 UTC (1m32s) STEP: updating the image Aug 18 10:58:40.445: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Aug 18 10:59:06.512: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-08-18 10:58:51 +0000 UTC restartedAt=2021-08-18 10:59:05 +0000 UTC (14s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:59:06.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7561" for this suite. 
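Editor's note: the back-off test above depends on a pod that keeps crashing. The kubelet restarts it with exponentially increasing delays (24s, 46s, 1m32s in this run), and updating the container image resets the back-off timer, so the next restart comes after only 14s. A sketch of a crash-looping pod of that shape (name and busybox:1.29 image appear in the log; the command is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-back-off-image", Namespace: "pods-7561"},
		Spec: corev1.PodSpec{
			// RestartPolicy defaults to Always, so every exit triggers another, increasingly delayed, restart
			Containers: []corev1.Container{{
				Name:    "back-off",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "sleep 5; exit 1"}, // exits quickly -> CrashLoopBackOff
			}},
		},
	}

	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}

The test then updates spec.containers[0].image and measures the gap between the next finishedAt/restartedAt pair, which is how the getRestartDelay figures quoted above (including the post-update 14s) are computed.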
• [SLOW TEST:244.218 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:678 ------------------------------ {"msg":"PASSED [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":1,"skipped":7,"failed":0} Aug 18 10:59:06.524: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:10.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 STEP: Creating pod liveness-ed8c8830-2428-4591-801c-9676078ee12d in namespace container-probe-161 Aug 18 10:55:22.812: INFO: Started pod liveness-ed8c8830-2428-4591-801c-9676078ee12d in namespace container-probe-161 STEP: checking the pod's current state and verifying that restartCount is present Aug 18 10:55:22.815: INFO: Initial restart count of pod liveness-ed8c8830-2428-4591-801c-9676078ee12d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 10:59:23.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-161" for this suite. 
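Editor's note: the counterpart probe test above checks that a redirect to a different host does not count as a liveness failure: the kubelet does not follow the non-local redirect and treats the response as a success (with a warning), so the restart count stays at 0 for the whole observation window, as the log confirms. The only difference from the local-redirect sketch earlier is the redirect target, for example:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		// v1.19 field name; newer k8s.io/api releases call this ProbeHandler
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				// illustrative: the endpoint redirects to a URL on a *different* host,
				// which the kubelet does not follow, so the probe does not fail
				Path: "/redirect?loc=http%3A%2F%2F0.0.0.0%2F",
				Port: intstr.FromInt(8080),
			},
		},
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}

	out, _ := json.MarshalIndent(&probe, "", "  ")
	fmt.Println(string(out))
}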
• [SLOW TEST:252.521 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":2,"skipped":162,"failed":0} Aug 18 10:59:23.289: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:23.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 STEP: wait until node is ready Aug 18 10:55:23.976: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Aug 18 10:55:24.986: INFO: node status heartbeat is unchanged for 1.002810241s, waiting for 1m20s Aug 18 10:55:25.988: INFO: node status heartbeat is unchanged for 2.004141386s, waiting for 1m20s Aug 18 10:55:26.990: INFO: node status heartbeat is unchanged for 3.006876314s, waiting for 1m20s Aug 18 10:55:27.988: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:55:27.991: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: 
v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:17 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:27 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:17 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:27 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:17 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:27 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    NodeInfo: v1.NodeSystemInfo{MachineID: "dc1f286135c145349b8a016880b65a2f", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "037e7e2d-94e9-42f4-a719-896e5005ac70", KernelVersion: "3.10.0-1160.36.2.el7.x86_64", OSImage: "CentOS Linux 7 (Core)", ContainerRuntimeVersion: "docker://19.3.14", KubeletVersion: "v1.19.8", KubeProxyVersion: "v1.19.8", OperatingSystem: "linux", Architecture: "amd64"},    Images: []v1.ContainerImage{    ... 
// 23 identical elements    {Names: []string{"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", "nginx:1.14-alpine"}, SizeBytes: 16032814},    {Names: []string{"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e", "gcr.io/google-samples/hello-go-gke:1.0"}, SizeBytes: 11443478}, +  { +  Names: []string{ +  "gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411", +  "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0", +  }, +  SizeBytes: 6757579, +  },    {Names: []string{"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb", "appropriate/curl:edge"}, SizeBytes: 5654234}, +  { +  Names: []string{ +  "gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0", +  "gcr.io/authenticated-image-pulling/alpine:3.7", +  }, +  SizeBytes: 4206620, +  }, +  { +  Names: []string{ +  "busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", +  "busybox:1.29", +  }, +  SizeBytes: 1154361, +  },    {Names: []string{"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", "busybox:1.28"}, SizeBytes: 1146369},    {Names: []string{"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa", "k8s.gcr.io/pause:3.3"}, SizeBytes: 682696},    },    VolumesInUse: nil,    VolumesAttached: nil,    Config: nil,   } Aug 18 10:55:28.989: INFO: node status heartbeat is unchanged for 1.001518355s, waiting for 1m20s Aug 18 10:55:29.988: INFO: node status heartbeat is unchanged for 2.000759181s, waiting for 1m20s Aug 18 10:55:30.988: INFO: node status heartbeat is unchanged for 3.000477993s, waiting for 1m20s Aug 18 10:55:31.988: INFO: node status heartbeat is unchanged for 4.00027276s, waiting for 1m20s Aug 18 10:55:32.987: INFO: node status heartbeat is unchanged for 4.999258117s, waiting for 1m20s Aug 18 10:55:33.988: INFO: node status heartbeat is unchanged for 6.000151049s, waiting for 1m20s Aug 18 10:55:34.987: INFO: node status heartbeat is unchanged for 7.000014131s, waiting for 1m20s Aug 18 10:55:35.987: INFO: node status heartbeat is unchanged for 7.99981216s, waiting for 1m20s Aug 18 10:55:36.987: INFO: node status heartbeat is unchanged for 8.999727832s, waiting for 1m20s Aug 18 10:55:37.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:55:37.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 
405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:37 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:37 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:37 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } Aug 18 10:55:38.989: INFO: node status heartbeat is unchanged for 1.001529745s, waiting for 1m20s Aug 18 10:55:39.987: INFO: node status heartbeat is unchanged for 1.999538584s, waiting for 1m20s Aug 18 10:55:40.987: INFO: node status heartbeat is unchanged for 2.999452546s, waiting for 1m20s Aug 18 10:55:41.987: INFO: node status heartbeat is unchanged for 3.999590666s, waiting for 1m20s Aug 18 10:55:42.987: INFO: node status heartbeat is unchanged for 4.999567107s, waiting for 1m20s Aug 18 10:55:43.988: INFO: node status heartbeat is unchanged for 6.000920014s, waiting for 1m20s Aug 18 10:55:44.988: INFO: node status heartbeat is unchanged for 7.000412409s, waiting for 1m20s Aug 18 10:55:45.988: INFO: node status heartbeat is unchanged for 8.000339863s, waiting for 1m20s Aug 18 10:55:46.987: INFO: node status heartbeat is unchanged for 9.00018917s, waiting for 1m20s Aug 18 10:55:47.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:55:47.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:37 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:47 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:37 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:47 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:37 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:47 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:55:48.988: INFO: node status heartbeat is unchanged for 1.000418092s, waiting for 1m20s Aug 18 10:55:49.987: INFO: node status heartbeat is unchanged for 1.999476597s, waiting for 1m20s Aug 18 10:55:50.988: INFO: node status heartbeat is unchanged for 3.001184841s, waiting for 1m20s Aug 18 10:55:51.987: INFO: node status heartbeat is unchanged for 3.999415187s, waiting for 1m20s Aug 18 10:55:52.987: INFO: node status heartbeat is unchanged for 4.999212524s, waiting for 1m20s Aug 18 10:55:53.988: INFO: node status heartbeat is unchanged for 6.000338195s, waiting for 1m20s Aug 18 10:55:54.987: INFO: node status heartbeat is unchanged for 6.999347904s, waiting for 1m20s Aug 18 10:55:55.987: INFO: node status heartbeat is unchanged for 8.00016732s, waiting for 1m20s Aug 18 10:55:56.988: INFO: node status heartbeat is unchanged for 9.000400979s, waiting for 1m20s Aug 18 10:55:57.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:55:57.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {   
 Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:47 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:57 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:47 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:57 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:47 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:57 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:55:58.987: INFO: node status heartbeat is unchanged for 999.775912ms, waiting for 1m20s Aug 18 10:55:59.986: INFO: node status heartbeat is unchanged for 1.999041329s, waiting for 1m20s Aug 18 10:56:00.987: INFO: node status heartbeat is unchanged for 2.999599042s, waiting for 1m20s Aug 18 10:56:01.988: INFO: node status heartbeat is unchanged for 4.001127337s, waiting for 1m20s Aug 18 10:56:02.988: INFO: node status heartbeat is unchanged for 5.000424541s, waiting for 1m20s Aug 18 10:56:03.987: INFO: node status heartbeat is unchanged for 5.999534451s, waiting for 1m20s Aug 18 10:56:04.988: INFO: node status heartbeat is unchanged for 7.000315716s, waiting for 1m20s Aug 18 10:56:05.987: INFO: node status heartbeat is unchanged for 7.999225226s, waiting for 1m20s Aug 18 10:56:06.987: INFO: node status heartbeat is unchanged for 8.999435232s, waiting for 1m20s Aug 18 10:56:07.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:56:07.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:57 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:07 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:57 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:07 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:55:57 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:07 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } Aug 18 10:56:08.987: INFO: node status heartbeat is unchanged for 999.536325ms, waiting for 1m20s Aug 18 10:56:09.988: INFO: node status heartbeat is unchanged for 2.000561851s, waiting for 1m20s Aug 18 10:56:10.987: INFO: node status heartbeat is unchanged for 3.000164654s, waiting for 1m20s Aug 18 10:56:11.987: INFO: node status heartbeat is unchanged for 3.999741549s, waiting for 1m20s Aug 18 10:56:12.988: INFO: node status heartbeat is unchanged for 5.001216895s, waiting for 1m20s Aug 18 10:56:13.987: INFO: node status heartbeat is unchanged for 6.000094062s, waiting for 1m20s Aug 18 10:56:14.987: INFO: node status heartbeat is unchanged for 7.000333795s, waiting for 1m20s Aug 18 10:56:15.987: INFO: node status heartbeat is unchanged for 8.00005725s, waiting for 1m20s Aug 18 10:56:16.987: INFO: node status heartbeat is unchanged for 9.000102041s, waiting for 1m20s Aug 18 10:56:17.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:56:17.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:07 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:17 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:07 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:17 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:07 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:17 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:56:18.987: INFO: node status heartbeat is unchanged for 1.000031178s, waiting for 1m20s Aug 18 10:56:19.987: INFO: node status heartbeat is unchanged for 1.999897831s, waiting for 1m20s Aug 18 10:56:20.987: INFO: node status heartbeat is unchanged for 3.000202198s, waiting for 1m20s Aug 18 10:56:21.987: INFO: node status heartbeat is unchanged for 4.000025352s, waiting for 1m20s Aug 18 10:56:22.988: INFO: node status heartbeat is unchanged for 5.000787671s, waiting for 1m20s Aug 18 10:56:23.988: INFO: node status heartbeat is unchanged for 6.000881397s, waiting for 1m20s Aug 18 10:56:24.987: INFO: node status heartbeat is unchanged for 6.999822291s, waiting for 1m20s Aug 18 10:56:25.988: INFO: node status heartbeat is unchanged for 8.001202567s, waiting for 1m20s Aug 18 10:56:26.987: INFO: node status heartbeat is unchanged for 9.000429525s, waiting for 1m20s Aug 18 10:56:27.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:56:27.989: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {  
  Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:17 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:27 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:17 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:27 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:17 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:27 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:56:28.987: INFO: node status heartbeat is unchanged for 1.000728749s, waiting for 1m20s Aug 18 10:56:29.986: INFO: node status heartbeat is unchanged for 1.999882632s, waiting for 1m20s Aug 18 10:56:30.988: INFO: node status heartbeat is unchanged for 3.001625064s, waiting for 1m20s Aug 18 10:56:31.987: INFO: node status heartbeat is unchanged for 3.999913598s, waiting for 1m20s Aug 18 10:56:32.987: INFO: node status heartbeat is unchanged for 5.000766939s, waiting for 1m20s Aug 18 10:56:33.988: INFO: node status heartbeat is unchanged for 6.001517994s, waiting for 1m20s Aug 18 10:56:34.987: INFO: node status heartbeat is unchanged for 7.000513372s, waiting for 1m20s Aug 18 10:56:35.988: INFO: node status heartbeat is unchanged for 8.001014028s, waiting for 1m20s Aug 18 10:56:36.987: INFO: node status heartbeat is unchanged for 9.000611568s, waiting for 1m20s Aug 18 10:56:37.987: INFO: node status heartbeat is unchanged for 10.000834377s, waiting for 1m20s Aug 18 10:56:38.987: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Aug 18 10:56:38.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: 
"DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:27 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } Aug 18 10:56:39.987: INFO: node status heartbeat is unchanged for 999.900547ms, waiting for 1m20s Aug 18 10:56:40.987: INFO: node status heartbeat is unchanged for 1.999869986s, waiting for 1m20s Aug 18 10:56:41.987: INFO: node status heartbeat is unchanged for 3.000576566s, waiting for 1m20s Aug 18 10:56:42.987: INFO: node status heartbeat is unchanged for 3.999955001s, waiting for 1m20s Aug 18 10:56:43.987: INFO: node status heartbeat is unchanged for 5.000401925s, waiting for 1m20s Aug 18 10:56:44.990: INFO: node status heartbeat is unchanged for 6.00288256s, waiting for 1m20s Aug 18 10:56:45.988: INFO: node status heartbeat is unchanged for 7.001170911s, waiting for 1m20s Aug 18 10:56:46.987: INFO: node status heartbeat is unchanged for 8.000677031s, waiting for 1m20s Aug 18 10:56:47.989: INFO: node status heartbeat is unchanged for 9.002099695s, waiting for 1m20s Aug 18 10:56:48.989: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:56:48.992: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:56:49.987: INFO: node status heartbeat is unchanged for 998.126763ms, waiting for 1m20s Aug 18 10:56:50.988: INFO: node status heartbeat is unchanged for 1.999036495s, waiting for 1m20s Aug 18 10:56:51.988: INFO: node status heartbeat is unchanged for 2.999148812s, waiting for 1m20s Aug 18 10:56:52.988: INFO: node status heartbeat is unchanged for 3.998778056s, waiting for 1m20s Aug 18 10:56:53.988: INFO: node status heartbeat is unchanged for 4.998805749s, waiting for 1m20s Aug 18 10:56:54.989: INFO: node status heartbeat is unchanged for 6.000519551s, waiting for 1m20s Aug 18 10:56:55.987: INFO: node status heartbeat is unchanged for 6.998482804s, waiting for 1m20s Aug 18 10:56:56.987: INFO: node status heartbeat is unchanged for 7.997717144s, waiting for 1m20s Aug 18 10:56:57.987: INFO: node status heartbeat is unchanged for 8.99819196s, waiting for 1m20s Aug 18 10:56:58.988: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:56:58.991: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {   
 Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:56:59.988: INFO: node status heartbeat is unchanged for 1.000046944s, waiting for 1m20s Aug 18 10:57:00.987: INFO: node status heartbeat is unchanged for 1.999143233s, waiting for 1m20s Aug 18 10:57:01.987: INFO: node status heartbeat is unchanged for 2.999055966s, waiting for 1m20s Aug 18 10:57:02.987: INFO: node status heartbeat is unchanged for 3.999863765s, waiting for 1m20s Aug 18 10:57:03.987: INFO: node status heartbeat is unchanged for 4.999368824s, waiting for 1m20s Aug 18 10:57:04.988: INFO: node status heartbeat is unchanged for 6.000022191s, waiting for 1m20s Aug 18 10:57:05.988: INFO: node status heartbeat is unchanged for 6.999984093s, waiting for 1m20s Aug 18 10:57:06.987: INFO: node status heartbeat is unchanged for 7.999561462s, waiting for 1m20s Aug 18 10:57:07.987: INFO: node status heartbeat is unchanged for 8.999470175s, waiting for 1m20s Aug 18 10:57:08.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:57:08.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:56:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } Aug 18 10:57:09.987: INFO: node status heartbeat is unchanged for 999.829778ms, waiting for 1m20s Aug 18 10:57:10.988: INFO: node status heartbeat is unchanged for 2.000774544s, waiting for 1m20s Aug 18 10:57:11.987: INFO: node status heartbeat is unchanged for 2.999998016s, waiting for 1m20s Aug 18 10:57:12.987: INFO: node status heartbeat is unchanged for 4.00049806s, waiting for 1m20s Aug 18 10:57:13.988: INFO: node status heartbeat is unchanged for 5.000960564s, waiting for 1m20s Aug 18 10:57:14.987: INFO: node status heartbeat is unchanged for 5.999835878s, waiting for 1m20s Aug 18 10:57:15.988: INFO: node status heartbeat is unchanged for 7.001038695s, waiting for 1m20s Aug 18 10:57:16.987: INFO: node status heartbeat is unchanged for 8.000328922s, waiting for 1m20s Aug 18 10:57:17.987: INFO: node status heartbeat is unchanged for 8.999915623s, waiting for 1m20s Aug 18 10:57:18.988: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:57:18.991: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:57:19.987: INFO: node status heartbeat is unchanged for 998.898312ms, waiting for 1m20s Aug 18 10:57:20.987: INFO: node status heartbeat is unchanged for 1.998693552s, waiting for 1m20s Aug 18 10:57:21.989: INFO: node status heartbeat is unchanged for 3.000755787s, waiting for 1m20s Aug 18 10:57:22.988: INFO: node status heartbeat is unchanged for 3.999986835s, waiting for 1m20s Aug 18 10:57:23.988: INFO: node status heartbeat is unchanged for 5.000267869s, waiting for 1m20s Aug 18 10:57:24.987: INFO: node status heartbeat is unchanged for 5.999345016s, waiting for 1m20s Aug 18 10:57:25.987: INFO: node status heartbeat is unchanged for 6.999093712s, waiting for 1m20s Aug 18 10:57:26.987: INFO: node status heartbeat is unchanged for 7.999445082s, waiting for 1m20s Aug 18 10:57:27.989: INFO: node status heartbeat is unchanged for 9.00116839s, waiting for 1m20s Aug 18 10:57:28.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:57:28.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {   
 Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:28 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:28 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:28 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:57:29.988: INFO: node status heartbeat is unchanged for 1.001466868s, waiting for 1m20s Aug 18 10:57:30.987: INFO: node status heartbeat is unchanged for 2.000317526s, waiting for 1m20s Aug 18 10:57:31.989: INFO: node status heartbeat is unchanged for 3.002353032s, waiting for 1m20s Aug 18 10:57:32.988: INFO: node status heartbeat is unchanged for 4.001439125s, waiting for 1m20s Aug 18 10:57:33.988: INFO: node status heartbeat is unchanged for 5.00060609s, waiting for 1m20s Aug 18 10:57:34.987: INFO: node status heartbeat is unchanged for 6.000234788s, waiting for 1m20s Aug 18 10:57:35.987: INFO: node status heartbeat is unchanged for 7.00052628s, waiting for 1m20s Aug 18 10:57:36.988: INFO: node status heartbeat is unchanged for 8.000997719s, waiting for 1m20s Aug 18 10:57:37.988: INFO: node status heartbeat is unchanged for 9.001051321s, waiting for 1m20s Aug 18 10:57:38.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:57:38.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } Aug 18 10:57:39.987: INFO: node status heartbeat is unchanged for 999.842706ms, waiting for 1m20s Aug 18 10:57:40.987: INFO: node status heartbeat is unchanged for 2.000238948s, waiting for 1m20s Aug 18 10:57:41.987: INFO: node status heartbeat is unchanged for 3.000367701s, waiting for 1m20s Aug 18 10:57:42.988: INFO: node status heartbeat is unchanged for 4.001667401s, waiting for 1m20s Aug 18 10:57:43.988: INFO: node status heartbeat is unchanged for 5.000849324s, waiting for 1m20s Aug 18 10:57:44.990: INFO: node status heartbeat is unchanged for 6.002945383s, waiting for 1m20s Aug 18 10:57:45.987: INFO: node status heartbeat is unchanged for 6.999897641s, waiting for 1m20s Aug 18 10:57:46.987: INFO: node status heartbeat is unchanged for 8.000586066s, waiting for 1m20s Aug 18 10:57:47.989: INFO: node status heartbeat is unchanged for 9.002338172s, waiting for 1m20s Aug 18 10:57:48.988: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:57:48.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:57:49.987: INFO: node status heartbeat is unchanged for 999.500075ms, waiting for 1m20s Aug 18 10:57:50.988: INFO: node status heartbeat is unchanged for 2.00026663s, waiting for 1m20s Aug 18 10:57:51.987: INFO: node status heartbeat is unchanged for 2.999268987s, waiting for 1m20s Aug 18 10:57:52.988: INFO: node status heartbeat is unchanged for 4.000515646s, waiting for 1m20s Aug 18 10:57:53.987: INFO: node status heartbeat is unchanged for 4.99917342s, waiting for 1m20s Aug 18 10:57:54.987: INFO: node status heartbeat is unchanged for 5.999126491s, waiting for 1m20s Aug 18 10:57:55.987: INFO: node status heartbeat is unchanged for 6.999591768s, waiting for 1m20s Aug 18 10:57:56.987: INFO: node status heartbeat is unchanged for 7.999537789s, waiting for 1m20s Aug 18 10:57:57.988: INFO: node status heartbeat is unchanged for 9.000566608s, waiting for 1m20s Aug 18 10:57:58.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:57:58.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    
Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:57:59.987: INFO: node status heartbeat is unchanged for 999.48611ms, waiting for 1m20s Aug 18 10:58:00.989: INFO: node status heartbeat is unchanged for 2.001511505s, waiting for 1m20s Aug 18 10:58:01.987: INFO: node status heartbeat is unchanged for 3.000150197s, waiting for 1m20s Aug 18 10:58:02.989: INFO: node status heartbeat is unchanged for 4.00150495s, waiting for 1m20s Aug 18 10:58:03.987: INFO: node status heartbeat is unchanged for 5.000351268s, waiting for 1m20s Aug 18 10:58:04.990: INFO: node status heartbeat is unchanged for 6.002637192s, waiting for 1m20s Aug 18 10:58:05.987: INFO: node status heartbeat is unchanged for 7.000285361s, waiting for 1m20s Aug 18 10:58:06.988: INFO: node status heartbeat is unchanged for 8.000759609s, waiting for 1m20s Aug 18 10:58:07.989: INFO: node status heartbeat is unchanged for 9.001492647s, waiting for 1m20s Aug 18 10:58:08.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:58:08.989: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:57:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } Aug 18 10:58:09.989: INFO: node status heartbeat is unchanged for 1.002206503s, waiting for 1m20s Aug 18 10:58:10.987: INFO: node status heartbeat is unchanged for 2.000791691s, waiting for 1m20s Aug 18 10:58:11.989: INFO: node status heartbeat is unchanged for 3.0020926s, waiting for 1m20s Aug 18 10:58:12.987: INFO: node status heartbeat is unchanged for 4.000330703s, waiting for 1m20s Aug 18 10:58:13.988: INFO: node status heartbeat is unchanged for 5.00156733s, waiting for 1m20s Aug 18 10:58:14.988: INFO: node status heartbeat is unchanged for 6.001786861s, waiting for 1m20s Aug 18 10:58:15.988: INFO: node status heartbeat is unchanged for 7.00109428s, waiting for 1m20s Aug 18 10:58:16.989: INFO: node status heartbeat is unchanged for 8.002525522s, waiting for 1m20s Aug 18 10:58:17.989: INFO: node status heartbeat is unchanged for 9.001969599s, waiting for 1m20s Aug 18 10:58:18.990: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:58:18.993: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",   
 Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:58:19.988: INFO: node status heartbeat is unchanged for 997.331588ms, waiting for 1m20s Aug 18 10:58:20.987: INFO: node status heartbeat is unchanged for 1.996551275s, waiting for 1m20s Aug 18 10:58:21.987: INFO: node status heartbeat is unchanged for 2.997145628s, waiting for 1m20s Aug 18 10:58:22.988: INFO: node status heartbeat is unchanged for 3.997325101s, waiting for 1m20s Aug 18 10:58:23.987: INFO: node status heartbeat is unchanged for 4.996429391s, waiting for 1m20s Aug 18 10:58:24.988: INFO: node status heartbeat is unchanged for 5.997450001s, waiting for 1m20s Aug 18 10:58:25.986: INFO: node status heartbeat is unchanged for 6.9959063s, waiting for 1m20s Aug 18 10:58:26.987: INFO: node status heartbeat is unchanged for 7.996302595s, waiting for 1m20s Aug 18 10:58:27.987: INFO: node status heartbeat is unchanged for 8.996495714s, waiting for 1m20s Aug 18 10:58:28.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:58:28.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: 
"MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:28 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:28 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:28 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:58:29.987: INFO: node status heartbeat is unchanged for 999.395064ms, waiting for 1m20s Aug 18 10:58:30.987: INFO: node status heartbeat is unchanged for 1.999549091s, waiting for 1m20s Aug 18 10:58:31.986: INFO: node status heartbeat is unchanged for 2.999166422s, waiting for 1m20s Aug 18 10:58:32.987: INFO: node status heartbeat is unchanged for 4.000086266s, waiting for 1m20s Aug 18 10:58:33.987: INFO: node status heartbeat is unchanged for 4.999867429s, waiting for 1m20s Aug 18 10:58:34.987: INFO: node status heartbeat is unchanged for 5.999481281s, waiting for 1m20s Aug 18 10:58:35.987: INFO: node status heartbeat is unchanged for 6.999450763s, waiting for 1m20s Aug 18 10:58:36.987: INFO: node status heartbeat is unchanged for 8.000154207s, waiting for 1m20s Aug 18 10:58:37.988: INFO: node status heartbeat is unchanged for 9.000304855s, waiting for 1m20s Aug 18 10:58:38.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:58:38.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } Aug 18 10:58:39.987: INFO: node status heartbeat is unchanged for 999.629743ms, waiting for 1m20s Aug 18 10:58:40.987: INFO: node status heartbeat is unchanged for 1.999580725s, waiting for 1m20s Aug 18 10:58:41.987: INFO: node status heartbeat is unchanged for 2.999691617s, waiting for 1m20s Aug 18 10:58:42.988: INFO: node status heartbeat is unchanged for 4.000774929s, waiting for 1m20s Aug 18 10:58:43.987: INFO: node status heartbeat is unchanged for 5.000260576s, waiting for 1m20s Aug 18 10:58:44.988: INFO: node status heartbeat is unchanged for 6.000985704s, waiting for 1m20s Aug 18 10:58:45.987: INFO: node status heartbeat is unchanged for 6.999722164s, waiting for 1m20s Aug 18 10:58:46.988: INFO: node status heartbeat is unchanged for 8.000765259s, waiting for 1m20s Aug 18 10:58:47.989: INFO: node status heartbeat is unchanged for 9.002091555s, waiting for 1m20s Aug 18 10:58:48.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:58:48.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:58:49.989: INFO: node status heartbeat is unchanged for 1.001826783s, waiting for 1m20s Aug 18 10:58:50.987: INFO: node status heartbeat is unchanged for 1.999622507s, waiting for 1m20s Aug 18 10:58:51.988: INFO: node status heartbeat is unchanged for 3.000761873s, waiting for 1m20s Aug 18 10:58:52.989: INFO: node status heartbeat is unchanged for 4.001278278s, waiting for 1m20s Aug 18 10:58:53.988: INFO: node status heartbeat is unchanged for 5.000837943s, waiting for 1m20s Aug 18 10:58:54.988: INFO: node status heartbeat is unchanged for 6.00059853s, waiting for 1m20s Aug 18 10:58:55.987: INFO: node status heartbeat is unchanged for 6.99933494s, waiting for 1m20s Aug 18 10:58:56.987: INFO: node status heartbeat is unchanged for 7.999379943s, waiting for 1m20s Aug 18 10:58:57.990: INFO: node status heartbeat is unchanged for 9.00227184s, waiting for 1m20s Aug 18 10:58:58.988: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:58:58.991: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    
Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:58:59.988: INFO: node status heartbeat is unchanged for 999.561944ms, waiting for 1m20s Aug 18 10:59:00.987: INFO: node status heartbeat is unchanged for 1.998732914s, waiting for 1m20s Aug 18 10:59:01.988: INFO: node status heartbeat is unchanged for 3.000047916s, waiting for 1m20s Aug 18 10:59:02.988: INFO: node status heartbeat is unchanged for 3.999397081s, waiting for 1m20s Aug 18 10:59:03.988: INFO: node status heartbeat is unchanged for 5.000289213s, waiting for 1m20s Aug 18 10:59:04.988: INFO: node status heartbeat is unchanged for 5.999766037s, waiting for 1m20s Aug 18 10:59:05.988: INFO: node status heartbeat is unchanged for 7.000245212s, waiting for 1m20s Aug 18 10:59:06.988: INFO: node status heartbeat is unchanged for 7.99979641s, waiting for 1m20s Aug 18 10:59:07.988: INFO: node status heartbeat is unchanged for 8.999947918s, waiting for 1m20s Aug 18 10:59:08.987: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:59:08.989: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:58:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } Aug 18 10:59:09.988: INFO: node status heartbeat is unchanged for 1.001108289s, waiting for 1m20s Aug 18 10:59:10.988: INFO: node status heartbeat is unchanged for 2.001094407s, waiting for 1m20s Aug 18 10:59:11.987: INFO: node status heartbeat is unchanged for 3.000598535s, waiting for 1m20s Aug 18 10:59:12.989: INFO: node status heartbeat is unchanged for 4.002409461s, waiting for 1m20s Aug 18 10:59:13.989: INFO: node status heartbeat is unchanged for 5.002365751s, waiting for 1m20s Aug 18 10:59:14.987: INFO: node status heartbeat is unchanged for 6.000255784s, waiting for 1m20s Aug 18 10:59:15.987: INFO: node status heartbeat is unchanged for 7.000481601s, waiting for 1m20s Aug 18 10:59:16.989: INFO: node status heartbeat is unchanged for 8.002335633s, waiting for 1m20s Aug 18 10:59:17.989: INFO: node status heartbeat is unchanged for 9.002781309s, waiting for 1m20s Aug 18 10:59:18.990: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:59:18.992: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:59:19.989: INFO: node status heartbeat is unchanged for 999.444778ms, waiting for 1m20s Aug 18 10:59:20.989: INFO: node status heartbeat is unchanged for 1.999195797s, waiting for 1m20s Aug 18 10:59:21.988: INFO: node status heartbeat is unchanged for 2.998333552s, waiting for 1m20s Aug 18 10:59:22.989: INFO: node status heartbeat is unchanged for 3.999842645s, waiting for 1m20s Aug 18 10:59:23.988: INFO: node status heartbeat is unchanged for 4.998325454s, waiting for 1m20s Aug 18 10:59:24.991: INFO: node status heartbeat is unchanged for 6.001182265s, waiting for 1m20s Aug 18 10:59:25.987: INFO: node status heartbeat is unchanged for 6.997863621s, waiting for 1m20s Aug 18 10:59:26.987: INFO: node status heartbeat is unchanged for 7.997677679s, waiting for 1m20s Aug 18 10:59:27.987: INFO: node status heartbeat is unchanged for 8.997934074s, waiting for 1m20s Aug 18 10:59:28.988: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:59:28.991: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {  
  Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:28 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:28 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:18 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:28 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:59:29.987: INFO: node status heartbeat is unchanged for 999.020686ms, waiting for 1m20s Aug 18 10:59:30.988: INFO: node status heartbeat is unchanged for 1.999405821s, waiting for 1m20s Aug 18 10:59:31.990: INFO: node status heartbeat is unchanged for 3.001254453s, waiting for 1m20s Aug 18 10:59:32.988: INFO: node status heartbeat is unchanged for 3.999719244s, waiting for 1m20s Aug 18 10:59:33.989: INFO: node status heartbeat is unchanged for 5.000389573s, waiting for 1m20s Aug 18 10:59:34.989: INFO: node status heartbeat is unchanged for 6.000446359s, waiting for 1m20s Aug 18 10:59:35.988: INFO: node status heartbeat is unchanged for 6.999999642s, waiting for 1m20s Aug 18 10:59:36.989: INFO: node status heartbeat is unchanged for 8.000106995s, waiting for 1m20s Aug 18 10:59:37.987: INFO: node status heartbeat is unchanged for 8.998461504s, waiting for 1m20s Aug 18 10:59:38.989: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:59:38.992: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:28 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:38 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } Aug 18 10:59:39.987: INFO: node status heartbeat is unchanged for 998.168959ms, waiting for 1m20s Aug 18 10:59:40.988: INFO: node status heartbeat is unchanged for 1.998577473s, waiting for 1m20s Aug 18 10:59:41.988: INFO: node status heartbeat is unchanged for 2.999173597s, waiting for 1m20s Aug 18 10:59:42.988: INFO: node status heartbeat is unchanged for 3.999310987s, waiting for 1m20s Aug 18 10:59:43.988: INFO: node status heartbeat is unchanged for 4.999358524s, waiting for 1m20s Aug 18 10:59:44.987: INFO: node status heartbeat is unchanged for 5.998122838s, waiting for 1m20s Aug 18 10:59:45.988: INFO: node status heartbeat is unchanged for 6.998813256s, waiting for 1m20s Aug 18 10:59:46.990: INFO: node status heartbeat is unchanged for 8.00074174s, waiting for 1m20s Aug 18 10:59:47.988: INFO: node status heartbeat is unchanged for 8.998895466s, waiting for 1m20s Aug 18 10:59:48.989: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:59:48.992: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:48 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:59:49.989: INFO: node status heartbeat is unchanged for 999.82521ms, waiting for 1m20s Aug 18 10:59:50.987: INFO: node status heartbeat is unchanged for 1.99770914s, waiting for 1m20s Aug 18 10:59:51.989: INFO: node status heartbeat is unchanged for 2.999607589s, waiting for 1m20s Aug 18 10:59:52.989: INFO: node status heartbeat is unchanged for 3.999531827s, waiting for 1m20s Aug 18 10:59:53.988: INFO: node status heartbeat is unchanged for 4.999234002s, waiting for 1m20s Aug 18 10:59:54.988: INFO: node status heartbeat is unchanged for 5.999143159s, waiting for 1m20s Aug 18 10:59:55.988: INFO: node status heartbeat is unchanged for 6.998995298s, waiting for 1m20s Aug 18 10:59:56.990: INFO: node status heartbeat is unchanged for 8.000966012s, waiting for 1m20s Aug 18 10:59:57.989: INFO: node status heartbeat is unchanged for 8.99965856s, waiting for 1m20s Aug 18 10:59:58.988: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 10:59:58.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    
Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:58 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 10:59:59.989: INFO: node status heartbeat is unchanged for 1.001664659s, waiting for 1m20s Aug 18 11:00:00.988: INFO: node status heartbeat is unchanged for 2.000903016s, waiting for 1m20s Aug 18 11:00:01.988: INFO: node status heartbeat is unchanged for 3.000925753s, waiting for 1m20s Aug 18 11:00:02.989: INFO: node status heartbeat is unchanged for 4.001603049s, waiting for 1m20s Aug 18 11:00:03.990: INFO: node status heartbeat is unchanged for 5.002338063s, waiting for 1m20s Aug 18 11:00:04.987: INFO: node status heartbeat is unchanged for 5.999446193s, waiting for 1m20s Aug 18 11:00:05.988: INFO: node status heartbeat is unchanged for 7.000670854s, waiting for 1m20s Aug 18 11:00:06.988: INFO: node status heartbeat is unchanged for 8.000372818s, waiting for 1m20s Aug 18 11:00:07.989: INFO: node status heartbeat is unchanged for 9.001826916s, waiting for 1m20s Aug 18 11:00:08.988: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 11:00:08.990: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: 
"DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 11:00:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 11:00:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 10:59:58 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 11:00:08 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... 
// 5 identical fields   } Aug 18 11:00:09.988: INFO: node status heartbeat is unchanged for 1.000319345s, waiting for 1m20s Aug 18 11:00:10.987: INFO: node status heartbeat is unchanged for 1.999743216s, waiting for 1m20s Aug 18 11:00:11.987: INFO: node status heartbeat is unchanged for 2.999770673s, waiting for 1m20s Aug 18 11:00:12.987: INFO: node status heartbeat is unchanged for 3.999782614s, waiting for 1m20s Aug 18 11:00:13.988: INFO: node status heartbeat is unchanged for 5.000311847s, waiting for 1m20s Aug 18 11:00:14.988: INFO: node status heartbeat is unchanged for 6.000466933s, waiting for 1m20s Aug 18 11:00:15.987: INFO: node status heartbeat is unchanged for 6.998869188s, waiting for 1m20s Aug 18 11:00:16.990: INFO: node status heartbeat is unchanged for 8.00190121s, waiting for 1m20s Aug 18 11:00:17.987: INFO: node status heartbeat is unchanged for 8.99926226s, waiting for 1m20s Aug 18 11:00:18.989: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 18 11:00:18.992: INFO:   v1.NodeStatus{    Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-18 08:26:36 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 11:00:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 11:00:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 11:00:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 11:00:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure", 
   Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 11:00:08 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-08-18 11:00:18 +0000 UTC"},    LastTransitionTime: v1.Time{Time: s"2021-08-18 08:22:49 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-18 08:23:32 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},    },    Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},    ... // 5 identical fields   } Aug 18 11:00:19.989: INFO: node status heartbeat is unchanged for 999.957699ms, waiting for 1m20s Aug 18 11:00:20.987: INFO: node status heartbeat is unchanged for 1.997882844s, waiting for 1m20s Aug 18 11:00:21.988: INFO: node status heartbeat is unchanged for 2.999556444s, waiting for 1m20s Aug 18 11:00:22.990: INFO: node status heartbeat is unchanged for 4.000945983s, waiting for 1m20s Aug 18 11:00:23.987: INFO: node status heartbeat is unchanged for 4.998115999s, waiting for 1m20s Aug 18 11:00:23.990: INFO: node status heartbeat is unchanged for 5.000598891s, waiting for 1m20s STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 11:00:23.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-7770" for this suite. • [SLOW TEST:300.049 seconds] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 18 10:55:12.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:719 STEP: getting restart delay when capped Aug 18 11:06:47.939: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-08-18 11:01:31 +0000 UTC restartedAt=2021-08-18 11:06:46 +0000 UTC (5m15s) Aug 18 11:12:00.062: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-08-18 11:06:51 +0000 UTC restartedAt=2021-08-18 11:11:58 +0000 UTC (5m7s) Aug 18 11:17:16.232: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-08-18 11:12:03 +0000 UTC restartedAt=2021-08-18 11:17:14 +0000 UTC (5m11s) STEP: getting restart delay after a capped delay Aug 18 11:22:28.400: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-08-18 11:17:19 +0000 UTC 
restartedAt=2021-08-18 11:22:27 +0000 UTC (5m8s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 18 11:22:28.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4270" for this suite. • [SLOW TEST:1635.862 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:719 ------------------------------ {"msg":"PASSED [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":2,"skipped":106,"failed":0} Aug 18 11:22:28.413: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":4,"skipped":1208,"failed":0} Aug 18 11:00:24.012: INFO: Running AfterSuite actions on all nodes Aug 18 11:22:28.476: INFO: Running AfterSuite actions on node 1 Aug 18 11:22:28.476: INFO: Skipping dumping logs from cluster Ran 30 of 5484 Specs in 1646.467 seconds SUCCESS! -- 30 Passed | 0 Failed | 0 Pending | 5454 Skipped Ginkgo ran 1 suite in 27m27.851295163s Test Suite Passed
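
Editor's note (not part of the suite output): the NodeLease spec above repeatedly diffs node2's status and shows the condition LastHeartbeatTime values advancing roughly every 10s while the node stays Ready. For readers who want to observe the same thing outside the e2e framework, the following is a minimal, hypothetical client-go sketch. The node name "node2" and the kubeconfig path are taken from the log; everything else (module versions, output format) is an assumption, not something the suite ran.

// lease_heartbeat_sketch.go -- illustrative only, not produced by the suite.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logged; adjust for other environments.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Node conditions carry the LastHeartbeatTime values shown in the diffs above.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("condition %-20s lastHeartbeat=%s\n", c.Type, c.LastHeartbeatTime.Time)
	}

	// With the NodeLease feature enabled, node health is tracked via Lease
	// renewals in the kube-node-lease namespace, so the status object itself
	// can be reported infrequently -- which is what the spec above verifies.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.RenewTime != nil {
		fmt.Printf("lease renewTime=%s\n", lease.Spec.RenewTime.Time)
	}
}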
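
Editor's note (not part of the suite output): in the MaxContainerBackOff spec, the getRestartDelay lines report delays of 5m15s, 5m7s, 5m11s, and 5m8s; they plateau because the kubelet roughly doubles the delay between container restarts and caps it. The sketch below illustrates that doubling-with-cap pattern using a 10s initial delay and a 5m cap as presumed defaults; it is an assumption-based illustration, not the kubelet's actual implementation, and the extra seconds in the log come from container start and sync overhead on top of the cap.

// backoff_cap_sketch.go -- illustrative only, not produced by the suite.
package main

import (
	"fmt"
	"time"
)

func main() {
	initial := 10 * time.Second    // assumed initial crash-loop back-off
	maxBackoff := 5 * time.Minute  // assumed MaxContainerBackOff
	delay := initial
	for restart := 1; restart <= 10; restart++ {
		fmt.Printf("restartCount=%d expected back-off ~ %v\n", restart, delay)
		delay *= 2
		if delay > maxBackoff {
			// Once capped, every further restart waits ~5m, matching the
			// roughly five-minute gaps logged for restartCount 7 through 10.
			delay = maxBackoff
		}
	}
}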