Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1630105611 - Will randomize all specs
Will run 5484 specs

Running in parallel across 10 nodes

Aug 27 23:06:53.488: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 23:06:53.493: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 27 23:06:53.518: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 27 23:06:53.574: INFO: The status of Pod cmk-init-discover-node1-spg26 is Succeeded, skipping waiting
Aug 27 23:06:53.574: INFO: The status of Pod cmk-init-discover-node2-l9qjd is Succeeded, skipping waiting
Aug 27 23:06:53.574: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 27 23:06:53.574: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Aug 27 23:06:53.574: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 27 23:06:53.585: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Aug 27 23:06:53.585: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Aug 27 23:06:53.585: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Aug 27 23:06:53.585: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Aug 27 23:06:53.585: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Aug 27 23:06:53.585: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Aug 27 23:06:53.585: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Aug 27 23:06:53.585: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 27 23:06:53.585: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Aug 27 23:06:53.585: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Aug 27 23:06:53.585: INFO: e2e test version: v1.19.14
Aug 27 23:06:53.586: INFO: kube-apiserver version: v1.19.8
Aug 27 23:06:53.587: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 23:06:53.591: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Aug 27 23:06:53.590: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 23:06:53.610: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Aug 27 23:06:53.602: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 23:06:53.623: INFO: Cluster IP family: ipv4
SS
------------------------------
Aug 27 23:06:53.603: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 23:06:53.624: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Aug 27 23:06:53.606: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 23:06:53.629: INFO: Cluster IP family: ipv4
S
------------------------------
Aug 27 23:06:53.608: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 23:06:53.629: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
Aug 27 23:06:53.616: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 23:06:53.636: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Aug 27 23:06:53.619: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 23:06:53.640: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
Aug 27 23:06:53.624: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 23:06:53.646: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Aug 27 23:06:53.627: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 23:06:53.647: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 23:06:53.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
Aug 27 23:06:53.671: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Aug 27 23:06:53.673: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a docker exec liveness probe with timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215
Aug 27 23:06:53.676: INFO: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 23:06:53.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5139" for this suite.
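The spec skipped above exercises an exec-based liveness probe with a short timeoutSeconds, which the legacy dockertools.NativeExecHandler cannot enforce. A minimal sketch of the kind of probe involved, assuming the v1.19-era k8s.io/api/core/v1 types (where the probe action is still the embedded Handler field); the pod name, image, and commands are illustrative, not the spec's own:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose liveness is checked by running a command inside the
	// container; TimeoutSeconds is the limit the skipped spec relies on.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "exec-liveness-timeout"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "probe-target",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				LivenessProbe: &corev1.Probe{
					// v1.19 API: the probe action lives in the embedded Handler field.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{
							// A probe command that runs longer than its timeout can only
							// fail (and restart the container) if the runtime actually
							// enforces TimeoutSeconds, which the Docker exec handler did not.
							Command: []string{"sh", "-c", "sleep 10"},
						},
					},
					InitialDelaySeconds: 5,
					TimeoutSeconds:      1,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Printf("%s: exec liveness probe timeout = %ds\n",
		pod.Name, pod.Spec.Containers[0].LivenessProbe.TimeoutSeconds)
}
```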
S [SKIPPING] [0.042 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be restarted with a docker exec liveness probe with timeout [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215

  The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:217
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 23:06:53.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename autoscaling
Aug 27 23:06:53.725: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Aug 27 23:06:53.727: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71
Aug 27 23:06:53.730: INFO: Only supported for providers [gce gke kubemark] (not skeleton)
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 23:06:53.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "autoscaling-4904" for this suite.
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0827 23:06:53.740019 37 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 168 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f2440, 0x754a840) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42f2440, 0x754a840) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001600d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003e26750, 0xcb5200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0035986e0, 0xc003e26750, 0xc0035986e0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc003e26750, 0x6a0318deab074b, 0xc003e26778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7718ac0, 0xc8, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0035807e0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000c4b740, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000c4b740, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000640770, 0x52eb480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc003e276c0, 0xc003b8ba40, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003b8ba40, 0x0, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003b8ba40, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00177a000, 0xc003b8ba40, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00177a000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00177a000, 0xc000636030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00016a280, 0x7ff60c6be310, 0xc001c09380, 0x4c2a88e, 0x14, 0xc000853170, 0x3, 0x3, 0x53a0560, 0xc00015e900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52f00e0, 0xc001c09380, 0x4c2a88e, 0x14, 0xc002f92b80, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52f00e0, 0xc001c09380, 0x4c2a88e, 0x14, 0xc000665c00, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001c09380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001c09380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001c09380, 0x4dec428) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:297 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:53.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling Aug 27 23:06:53.874: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 27 23:06:53.876: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Aug 27 23:06:53.878: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:06:53.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-4162" for this suite. 
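The nil-pointer panic above recurs after every skipped "Cluster size autoscaler scalability" spec in this run: the suite's AfterEach at cluster_autoscaler_scalability.go:115 calls framework/node.WaitForReadyNodes, but the clientset it passes (the leading 0x0 argument in the trace) was never assigned because the BeforeEach skipped at the provider check before initializing it, so the poll condition at wait.go:185 dereferences nil. A minimal sketch of that polling shape, with a guard that would surface an error instead of a panic; the nodeLister interface and waitForReadyNodes function below are illustrative stand-ins, not the framework's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// nodeLister stands in for the slice of the Kubernetes clientset that the
// framework's node-wait helper uses (hypothetical interface for this sketch).
type nodeLister interface {
	ListSchedulableNodes() ([]string, error)
}

// waitForReadyNodes mirrors the helper's shape: poll the API on an interval
// until enough nodes are ready or the timeout expires. With a nil client the
// closure would dereference nil on its first attempt, which is exactly the
// crash the AfterEach hit; checking up front turns it into an ordinary error.
func waitForReadyNodes(c nodeLister, want int) error {
	if c == nil {
		return errors.New("nil client: framework was never initialized for this spec")
	}
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		nodes, err := c.ListSchedulableNodes()
		if err != nil {
			return false, err
		}
		return len(nodes) >= want, nil
	})
}

func main() {
	// With a nil client this now reports an error rather than panicking with
	// "invalid memory address or nil pointer dereference".
	fmt.Println(waitForReadyNodes(nil, 1))
}
```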
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0827 23:06:53.889215 26 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 274 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f2440, 0x754a840) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42f2440, 0x754a840) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001600d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc004506750, 0xcb5200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00097ff00, 0xc004506750, 0xc00097ff00, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc004506750, 0x6a0318e7910ff4, 0xc004506778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7718ac0, 0xc5, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc000978f30, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001970240, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001970240, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00179b7a0, 0x52eb480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0045076c0, 0xc002eb1680, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002eb1680, 0x0, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002eb1680, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003a78000, 0xc002eb1680, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003a78000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003a78000, 0xc003a70030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00016a280, 0x7f8c673e3370, 0xc000803800, 0x4c2a88e, 0x14, 0xc0027886c0, 0x3, 0x3, 0x53a0560, 0xc00015e900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52f00e0, 0xc000803800, 0x4c2a88e, 0x14, 0xc000c70840, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52f00e0, 0xc000803800, 0x4c2a88e, 0x14, 0xc0035bdec0, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000803800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc000803800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc000803800, 0x4dec428) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale up at all [Feature:ClusterAutoscalerScalability1] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:138 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:53.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:88 [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:06:53.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-7793" for this suite. 
•SSSSS ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:54.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename localssd Aug 27 23:06:54.787: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 27 23:06:54.788: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:36 Aug 27 23:06:54.790: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:06:54.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "localssd-6491" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should write and read from node local SSD [Feature:GKELocalSSD] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:40 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:37 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:54.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Aug 27 23:06:54.099: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 27 23:06:54.101: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:01.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8550" for this suite. 
• [SLOW TEST:7.084 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:377 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":1,"skipped":172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:53.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Aug 27 23:06:53.689: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 27 23:06:53.692: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 Aug 27 23:06:53.709: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-7314" to be "Succeeded or Failed" Aug 27 23:06:53.710: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 1.703085ms Aug 27 23:06:55.713: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004424429s Aug 27 23:06:57.716: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007383249s Aug 27 23:06:59.719: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010819871s Aug 27 23:07:01.723: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.014626356s Aug 27 23:07:01.723: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:01.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7314" for this suite. 
• [SLOW TEST:8.092 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":1,"skipped":12,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:53.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples Aug 27 23:06:53.952: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 27 23:06:53.955: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 Aug 27 23:06:53.965: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 STEP: creating secret and pod Aug 27 23:06:54.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6293 create -f -' Aug 27 23:06:54.455: INFO: stderr: "" Aug 27 23:06:54.455: INFO: stdout: "secret/test-secret created\n" Aug 27 23:06:54.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6293 create -f -' Aug 27 23:06:54.717: INFO: stderr: "" Aug 27 23:06:54.717: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Aug 27 23:07:02.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-6293 logs secret-test-pod test-container' Aug 27 23:07:03.221: INFO: stderr: "" Aug 27 23:07:03.221: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:03.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-6293" for this suite. 
• [SLOW TEST:9.298 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret","total":-1,"completed":1,"skipped":108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:01.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:04.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6367" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":2,"skipped":34,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:54.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 Aug 27 23:06:54.245: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 STEP: creating the pod Aug 27 23:06:54.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3905 create -f -' Aug 27 23:06:54.629: INFO: stderr: "" Aug 27 23:06:54.629: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Aug 27 23:07:04.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3905 logs dapi-test-pod test-container' Aug 27 23:07:04.787: INFO: stderr: "" Aug 27 23:07:04.787: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-3905\nMY_POD_IP=10.244.3.179\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Aug 27 23:07:04.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3905 logs dapi-test-pod test-container' Aug 27 23:07:04.948: INFO: stderr: "" Aug 27 23:07:04.948: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-3905\nMY_POD_IP=10.244.3.179\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:04.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-3905" for this suite. 
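The MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP, and MY_HOST_IP values echoed in the pod's output above are injected through downward-API field references on the container's environment. A minimal sketch of how such env vars are declared, assuming k8s.io/api/core/v1; the pod layout here is illustrative rather than the example manifest the test actually applies:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardEnv builds an env var whose value is resolved from a pod field
// (the downward API); this is how the values printed by dapi-test-pod
// reach the container's environment.
func downwardEnv(name, fieldPath string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
		},
	}
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dapi-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					downwardEnv("MY_POD_NAME", "metadata.name"),
					downwardEnv("MY_POD_NAMESPACE", "metadata.namespace"),
					downwardEnv("MY_POD_IP", "status.podIP"),
					downwardEnv("MY_HOST_IP", "status.hostIP"),
				},
			}},
		},
	}
	for _, e := range pod.Spec.Containers[0].Env {
		fmt.Println(e.Name, "<-", e.ValueFrom.FieldRef.FieldPath)
	}
}
```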
• [SLOW TEST:10.734 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace","total":-1,"completed":1,"skipped":224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:54.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 Aug 27 23:06:54.980: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-988dda58-2362-4a7b-ad2c-c0692a00dc75" in namespace "security-context-test-3878" to be "Succeeded or Failed" Aug 27 23:06:54.983: INFO: Pod "busybox-privileged-true-988dda58-2362-4a7b-ad2c-c0692a00dc75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.748968ms Aug 27 23:06:56.986: INFO: Pod "busybox-privileged-true-988dda58-2362-4a7b-ad2c-c0692a00dc75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005969837s Aug 27 23:06:58.989: INFO: Pod "busybox-privileged-true-988dda58-2362-4a7b-ad2c-c0692a00dc75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00886991s Aug 27 23:07:00.992: INFO: Pod "busybox-privileged-true-988dda58-2362-4a7b-ad2c-c0692a00dc75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011803686s Aug 27 23:07:02.995: INFO: Pod "busybox-privileged-true-988dda58-2362-4a7b-ad2c-c0692a00dc75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014852337s Aug 27 23:07:04.997: INFO: Pod "busybox-privileged-true-988dda58-2362-4a7b-ad2c-c0692a00dc75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.017360644s Aug 27 23:07:04.997: INFO: Pod "busybox-privileged-true-988dda58-2362-4a7b-ad2c-c0692a00dc75" satisfied condition "Succeeded or Failed" Aug 27 23:07:05.003: INFO: Got logs for pod "busybox-privileged-true-988dda58-2362-4a7b-ad2c-c0692a00dc75": "" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:05.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3878" for this suite. 
• [SLOW TEST:10.066 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 ------------------------------ SS ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:05.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:148 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:05.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-5084" for this suite. 
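The sysctl spec above creates a pod declaring one valid and two invalid sysctls and expects the request to be rejected, since sysctl names on the pod-level security context are validated by the API server. A minimal sketch of such a pod, assuming k8s.io/api/core/v1; the concrete sysctl names are illustrative and not necessarily the spec's exact values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One well-formed sysctl plus two malformed names on the pod's
	// security context; a create request carrying these would be refused
	// by API validation before any kubelet involvement.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-reject-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{
					{Name: "kernel.shm_rmid_forced", Value: "0"}, // valid form
					{Name: "foo-", Value: "bar"},                 // invalid name
					{Name: ";echo b", Value: "c"},                // invalid name
				},
			},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
			}},
		},
	}
	fmt.Printf("%d sysctls declared on %s\n",
		len(pod.Spec.SecurityContext.Sysctls), pod.Name)
}
```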
•SS ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":3,"skipped":182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:55.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 Aug 27 23:06:55.266: INFO: Waiting up to 5m0s for pod "busybox-user-0-876b4af0-b293-4b7e-96f4-c4001eff79ca" in namespace "security-context-test-1420" to be "Succeeded or Failed" Aug 27 23:06:55.269: INFO: Pod "busybox-user-0-876b4af0-b293-4b7e-96f4-c4001eff79ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.861004ms Aug 27 23:06:57.273: INFO: Pod "busybox-user-0-876b4af0-b293-4b7e-96f4-c4001eff79ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006290143s Aug 27 23:06:59.276: INFO: Pod "busybox-user-0-876b4af0-b293-4b7e-96f4-c4001eff79ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00906619s Aug 27 23:07:01.278: INFO: Pod "busybox-user-0-876b4af0-b293-4b7e-96f4-c4001eff79ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011505399s Aug 27 23:07:03.282: INFO: Pod "busybox-user-0-876b4af0-b293-4b7e-96f4-c4001eff79ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015156359s Aug 27 23:07:05.285: INFO: Pod "busybox-user-0-876b4af0-b293-4b7e-96f4-c4001eff79ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.018341316s Aug 27 23:07:05.285: INFO: Pod "busybox-user-0-876b4af0-b293-4b7e-96f4-c4001eff79ca" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:05.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1420" for this suite. 
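The Security Context specs in this run (image-specified user ID, privileged-when-true, running with uid 0) all come down to container-level securityContext fields. A minimal sketch of those fields, assuming k8s.io/api/core/v1; the combination shown is illustrative and does not reproduce any single spec's pod:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Helpers because the SecurityContext fields are pointers.
func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Container-level security settings exercised by the specs above: pin the
	// UID, allow privileged mode, or (in the runAsNonRoot specs) require that
	// the resolved user is non-root.
	sc := &corev1.SecurityContext{
		RunAsUser:    int64Ptr(0),    // "should run the container with uid 0"
		Privileged:   boolPtr(true),  // "privileged when true"
		RunAsNonRoot: boolPtr(false), // the runAsNonRoot specs set this to true instead
	}
	c := corev1.Container{
		Name:            "test-container",
		Image:           "busybox",
		Command:         []string{"id"},
		SecurityContext: sc,
	}
	fmt.Printf("%s: uid=%d privileged=%v\n", c.Name, *sc.RunAsUser, *sc.Privileged)
}
```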
• [SLOW TEST:10.059 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 ------------------------------ SSS ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":727,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:05.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Aug 27 23:07:05.377: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:05.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-9314" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0827 23:07:05.386511 26 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 274 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f2440, 0x754a840) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42f2440, 0x754a840) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001600d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc004506750, 0xcb5200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0049d1d20, 0xc004506750, 0xc0049d1d20, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc004506750, 0x6a031b94db5b69, 0xc004506778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7718ac0, 0x99, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc004b0a7b0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001970240, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001970240, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00179b7a0, 0x52eb480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0045076c0, 0xc002eb1770, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002eb1770, 0x0, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002eb1770, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003a78000, 0xc002eb1770, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003a78000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003a78000, 0xc003a70030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00016a280, 0x7f8c673e3370, 0xc000803800, 0x4c2a88e, 0x14, 0xc0027886c0, 0x3, 0x3, 0x53a0560, 0xc00015e900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52f00e0, 0xc000803800, 0x4c2a88e, 0x14, 0xc000c70840, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52f00e0, 0xc000803800, 0x4c2a88e, 0x14, 0xc0035bdec0, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000803800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc000803800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc000803800, 0x4dec428) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale up twice [Feature:ClusterAutoscalerScalability2] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:161 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:05.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Aug 27 23:07:05.451: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:05.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-6342" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0827 23:07:05.462237 31 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 236 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f2440, 0x754a840) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42f2440, 0x754a840) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc000222078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00086e750, 0xcb5200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0040a60a0, 0xc00086e750, 0xc0040a60a0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00086e750, 0x6a031b995ebd8f, 0xc00086e778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7718ac0, 0xa6, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc003644ab0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001908900, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001908900, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000ab5b50, 0x52eb480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00086f6c0, 0xc003d4f950, 0x52eb480, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003d4f950, 0x0, 0x52eb480, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003d4f950, 0x52eb480, 0xc0002608c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001a20000, 0xc003d4f950, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001a20000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001a20000, 0xc003e96050) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000244230, 0x7fd2d6583cd0, 0xc001659500, 0x4c2a88e, 0x14, 0xc0034fcb70, 0x3, 0x3, 0x53a0560, 0xc0002608c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52f00e0, 0xc001659500, 0x4c2a88e, 0x14, 0xc001026280, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52f00e0, 0xc001659500, 0x4c2a88e, 0x14, 0xc002a11920, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001659500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001659500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001659500, 0x4dec428) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:238 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:05.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Aug 27 23:07:05.477: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:05.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-8318" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0827 23:07:05.486389 24 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 258 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f2440, 0x754a840) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42f2440, 0x754a840) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001ade750, 0xcb5200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003cd0e20, 0xc001ade750, 0xc003cd0e20, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc001ade750, 0x6a031b9acf8888, 0xc001ade778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7718ac0, 0x83, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc003df80c0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000caa720, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000caa720, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0003c9880, 0x52eb480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc001adf6c0, 0xc004315860, 0x52eb480, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc004315860, 0x0, 0x52eb480, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc004315860, 0x52eb480, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001516140, 0xc004315860, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001516140, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001516140, 0xc000fdc058) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7f09c37fc4c8, 0xc00397be00, 0x4c2a88e, 0x14, 0xc003aa1dd0, 0x3, 0x3, 0x53a0560, 0xc0001e08c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52f00e0, 0xc00397be00, 0x4c2a88e, 0x14, 0xc003be6240, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52f00e0, 0xc00397be00, 0x4c2a88e, 0x14, 0xc003b80c00, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00397be00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc00397be00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc00397be00, 0x4dec428) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale down empty nodes [Feature:ClusterAutoscalerScalability3] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:210 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:01.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 27 23:07:05.598: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:05.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8227" for this suite. 
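Editor's note: the termination-message check above works because the kubelet reads whatever the container wrote to its terminationMessagePath (by default /dev/termination-log) and copies it into the container status, which the test then compares against the expected "DONE". As a rough illustration only, not the e2e test's own fixture, here is how such a container could be declared with the core/v1 types; the image, name and message are placeholders and the snippet assumes k8s.io/api is available.

```go
// Illustrative sketch of a container whose exit message should appear in
// Status.ContainerStatuses[i].State.Terminated.Message; not the e2e fixture.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-demo", // hypothetical name
		Image:   "busybox",                  // placeholder image
		Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-log"},
		// The kubelet reads this file when the container terminates and
		// surfaces its contents as the termination message.
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageReadFile,
	}
	fmt.Printf("container %q writes its termination message to %s\n",
		c.Name, c.TerminationMessagePath)
}
```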
•SSS ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":2,"skipped":362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:06.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-pools STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:34 Aug 27 23:07:06.215: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:06.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-pools-9587" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a cluster with multiple node pools [Feature:GKENodePool] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:38 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:35 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:03.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:09.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-539" for this suite. • [SLOW TEST:6.090 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":2,"skipped":263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:05.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:09.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-606" for this suite. •SS ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":2,"skipped":579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:05.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 Aug 27 23:07:05.556: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-8720" to be "Succeeded or Failed" Aug 27 23:07:05.559: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.832954ms Aug 27 23:07:07.562: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0058491s Aug 27 23:07:09.565: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009400315s Aug 27 23:07:11.572: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016361467s Aug 27 23:07:11.572: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:11.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8720" for this suite. • [SLOW TEST:6.065 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":817,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:10.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:183 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:12.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9186" for this suite. 
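Editor's note: both sysctl specs above exercise the same mechanism. Sysctls are requested per pod through the pod-level security context; safe ones such as kernel.shm_rmid_forced are allowed by default, while unsafe or unlisted ones must be explicitly enabled on the kubelet (--allowed-unsafe-sysctls) or the pod is rejected, which is what the second spec verifies. A rough sketch of the relevant field, with placeholder names and assuming k8s.io/api is importable:

```go
// Sketch of requesting a sysctl through the pod security context; the pod
// itself is illustrative, not the e2e test's pod.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:    "sysctl-demo",
			Image:   "busybox",
			Command: []string{"/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"},
		}},
		SecurityContext: &corev1.PodSecurityContext{
			// "Safe" sysctls like this one are always allowed; unsafe or
			// unlisted ones cause the pod to be rejected unless the kubelet
			// explicitly permits them via --allowed-unsafe-sysctls.
			Sysctls: []corev1.Sysctl{{
				Name:  "kernel.shm_rmid_forced",
				Value: "1",
			}},
		},
		RestartPolicy: corev1.RestartPolicyNever,
	}
	fmt.Printf("pod requests %d sysctl(s)\n", len(spec.SecurityContext.Sysctls))
}
```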
• ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":3,"skipped":475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:12.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Aug 27 23:07:12.429: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:12.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-8560" for this suite. [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0827 23:07:12.441674 38 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 129 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42f2440, 0x754a840) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42f2440, 0x754a840) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001600d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc000ccc750, 0xcb5200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003b997a0, 0xc000ccc750, 0xc003b997a0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc000ccc750, 0x6a031d39602899, 0xc000ccc778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7718ac0, 
0x8b, 0x4f92d7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002b31f80, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00080ea80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00080ea80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000692a60, 0x52eb480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc000ccd6c0, 0xc003129b30, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003129b30, 0x0, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003129b30, 0x52eb480, 0xc00015e900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0040d8000, 0xc003129b30, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0040d8000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0040d8000, 0xc000712038) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00016a280, 0x7f61dbc379d0, 0xc001500a80, 0x4c2a88e, 0x14, 0xc0024b8330, 0x3, 0x3, 0x53a0560, 0xc00015e900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52f00e0, 0xc001500a80, 0x4c2a88e, 0x14, 0xc002e77140, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52f00e0, 0xc001500a80, 0x4c2a88e, 0x14, 0xc003238fa0, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001500a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001500a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001500a80, 0x4dec428) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:335 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:06.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:14.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6355" for this suite. 
• [SLOW TEST:8.041 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":4,"skipped":718,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:11.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 Aug 27 23:07:11.886: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-fa38501e-047c-4e6e-8161-f1de077bc89a" in namespace "security-context-test-3089" to be "Succeeded or Failed" Aug 27 23:07:11.888: INFO: Pod "alpine-nnp-nil-fa38501e-047c-4e6e-8161-f1de077bc89a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19446ms Aug 27 23:07:13.891: INFO: Pod "alpine-nnp-nil-fa38501e-047c-4e6e-8161-f1de077bc89a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004922057s Aug 27 23:07:15.895: INFO: Pod "alpine-nnp-nil-fa38501e-047c-4e6e-8161-f1de077bc89a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009233314s Aug 27 23:07:15.895: INFO: Pod "alpine-nnp-nil-fa38501e-047c-4e6e-8161-f1de077bc89a" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:15.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3089" for this suite. 
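Editor's note: the security-context specs above hinge on a few per-container SecurityContext fields. RunAsNonRoot makes the kubelet refuse to start a container that would run as UID 0 (and, for images without a numeric USER, refuse to start it at all unless RunAsUser is given), while AllowPrivilegeEscalation controls the no_new_privs flag and, when left nil with a non-root UID, leaves escalation allowed, as the preceding spec confirms. A minimal sketch of those fields with placeholder values, assuming k8s.io/api is importable:

```go
// Sketch of the container-level security context fields exercised by the
// runAsNonRoot and AllowPrivilegeEscalation specs; values are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	sc := corev1.SecurityContext{
		// With RunAsNonRoot set, the kubelet refuses to start the container
		// if it would run as UID 0; an explicit non-root RunAsUser (as in the
		// "explicit non-root user ID" spec) lets it start.
		RunAsNonRoot: boolPtr(true),
		RunAsUser:    int64Ptr(1234), // hypothetical non-root UID
		// Leaving this nil keeps privilege escalation possible for a
		// non-root process; setting it to false turns on no_new_privs.
		AllowPrivilegeEscalation: boolPtr(false),
	}
	fmt.Printf("runAsNonRoot=%v runAsUser=%v allowPrivilegeEscalation=%v\n",
		*sc.RunAsNonRoot, *sc.RunAsUser, *sc.AllowPrivilegeEscalation)
}
```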
• ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":953,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:05.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 STEP: Creating a pod with a privileged container STEP: Executing in the privileged container Aug 27 23:07:15.773: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-4624 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 27 23:07:15.773: INFO: >>> kubeConfig: /root/.kube/config Aug 27 23:07:15.890: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-4624 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 27 23:07:15.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Aug 27 23:07:16.012: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-4624 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 27 23:07:16.012: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:16.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-4624" for this suite. 
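Editor's note: the privileged-pod spec above runs the same `ip link add dummy1 type dummy` command in two containers of one pod and expects it to succeed only where the container is privileged, since creating network interfaces needs CAP_NET_ADMIN and a privileged container receives all capabilities. The sketch below shows only the two-container shape of such a pod; the image and commands are placeholders, though the container names mirror the ones in the log.

```go
// Sketch of the two-container layout behind the privileged-pod check; not the
// e2e fixture. Only the container names are taken from the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := corev1.PodSpec{
		Containers: []corev1.Container{
			{
				Name:    "privileged-container",
				Image:   "busybox", // placeholder
				Command: []string{"sleep", "3600"},
				// Privileged grants all capabilities (including NET_ADMIN),
				// so "ip link add dummy1 type dummy" succeeds here.
				SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
			},
			{
				Name:    "not-privileged-container",
				Image:   "busybox", // placeholder
				Command: []string{"sleep", "3600"},
				// Without NET_ADMIN the same command is expected to fail.
			},
		},
	}
	fmt.Printf("pod has %d containers\n", len(pod.Containers))
}
```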
• [SLOW TEST:10.407 seconds] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":3,"skipped":418,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:54.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods Aug 27 23:06:54.136: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 27 23:06:54.137: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:774 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:16.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2943" for this suite. 
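Editor's note: readiness gates, exercised by the pods-2943 spec above, let a pod declare extra conditions that must be True before the pod counts as Ready; the kubelet only evaluates them, and whoever owns the condition (a controller, or in the test a direct status patch) is responsible for setting it. A rough sketch of the spec side, reusing the condition types patched in the log but with everything else illustrative, assuming k8s.io/api is importable:

```go
// Sketch of declaring pod readiness gates; the condition types mirror the
// ones patched in the log ("k8s.io/test-condition1", "k8s.io/test-condition2"),
// everything else is a placeholder.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "readiness-gate-demo",
			Image: "busybox",
		}},
		// The pod only reports Ready once its containers are ready AND every
		// gated condition below has been set to True in the pod's status.
		ReadinessGates: []corev1.PodReadinessGate{
			{ConditionType: "k8s.io/test-condition1"},
			{ConditionType: "k8s.io/test-condition2"},
		},
	}
	fmt.Printf("pod declares %d readiness gate(s)\n", len(spec.ReadinessGates))
}
```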
• [SLOW TEST:22.077 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:774 ------------------------------ SS ------------------------------ {"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":1,"skipped":199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:16.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:16.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-4827" for this suite. •S ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":4,"skipped":1116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:09.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 Aug 27 23:07:09.812: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-60f3de44-8279-4685-bab0-e85a51e2f6b6" in namespace "security-context-test-937" to be "Succeeded or Failed" Aug 27 23:07:09.814: INFO: Pod "alpine-nnp-true-60f3de44-8279-4685-bab0-e85a51e2f6b6": Phase="Pending", Reason="", readiness=false. Elapsed: 1.749095ms Aug 27 23:07:11.818: INFO: Pod "alpine-nnp-true-60f3de44-8279-4685-bab0-e85a51e2f6b6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00518679s Aug 27 23:07:13.820: INFO: Pod "alpine-nnp-true-60f3de44-8279-4685-bab0-e85a51e2f6b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007598601s Aug 27 23:07:15.823: INFO: Pod "alpine-nnp-true-60f3de44-8279-4685-bab0-e85a51e2f6b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010152018s Aug 27 23:07:17.826: INFO: Pod "alpine-nnp-true-60f3de44-8279-4685-bab0-e85a51e2f6b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01382154s Aug 27 23:07:17.826: INFO: Pod "alpine-nnp-true-60f3de44-8279-4685-bab0-e85a51e2f6b6" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:17.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-937" for this suite. • [SLOW TEST:8.056 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:14.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:19.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5303" for this suite. 
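Editor's note: pulling from a private registry, as in the spec above, requires a kubernetes.io/dockerconfigjson secret referenced from the pod's imagePullSecrets. The sketch below shows only the shape of those two objects; the registry, credentials and names are placeholders, it is not the test's own fixture, and it assumes k8s.io/api and k8s.io/apimachinery are importable.

```go
// Sketch of the objects involved in a private-registry pull: a docker-config
// secret plus a pod spec referencing it. All values are placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "registry-credentials"}, // hypothetical
		Type:       corev1.SecretTypeDockerConfigJson,
		StringData: map[string]string{
			// The kubelet uses the matching auth entry when pulling images
			// from this registry host. Credentials here are fake.
			corev1.DockerConfigJsonKey: `{"auths":{"registry.example.com":{"auth":"dXNlcjpwYXNz"}}}`,
		},
	}
	pod := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "private-image-demo",
			Image: "registry.example.com/team/app:latest", // placeholder
		}},
		ImagePullSecrets: []corev1.LocalObjectReference{{Name: secret.Name}},
	}
	fmt.Printf("pod pulls %q using secret %q\n",
		pod.Containers[0].Image, pod.ImagePullSecrets[0].Name)
}
```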
• [SLOW TEST:5.071 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":5,"skipped":759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Aug 27 23:07:19.650: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:12.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 Aug 27 23:07:12.517: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-92017078-6155-47a5-bf1d-f8d4179a498a" in namespace "security-context-test-7949" to be "Succeeded or Failed" Aug 27 23:07:12.519: INFO: Pod "busybox-readonly-true-92017078-6155-47a5-bf1d-f8d4179a498a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.793061ms Aug 27 23:07:14.523: INFO: Pod "busybox-readonly-true-92017078-6155-47a5-bf1d-f8d4179a498a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005090636s Aug 27 23:07:16.525: INFO: Pod "busybox-readonly-true-92017078-6155-47a5-bf1d-f8d4179a498a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0078431s Aug 27 23:07:18.529: INFO: Pod "busybox-readonly-true-92017078-6155-47a5-bf1d-f8d4179a498a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011388417s Aug 27 23:07:20.533: INFO: Pod "busybox-readonly-true-92017078-6155-47a5-bf1d-f8d4179a498a": Phase="Failed", Reason="", readiness=false. Elapsed: 8.01575586s Aug 27 23:07:20.533: INFO: Pod "busybox-readonly-true-92017078-6155-47a5-bf1d-f8d4179a498a" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:20.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7949" for this suite. 
• [SLOW TEST:8.058 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":679,"failed":0} Aug 27 23:07:20.544: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:53.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe Aug 27 23:06:53.809: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 27 23:06:53.811: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 STEP: Creating pod liveness-86dc42f3-e010-4949-b4f0-f641b26979e3 in namespace container-probe-280 Aug 27 23:06:59.830: INFO: Started pod liveness-86dc42f3-e010-4949-b4f0-f641b26979e3 in namespace container-probe-280 STEP: checking the pod's current state and verifying that restartCount is present Aug 27 23:06:59.832: INFO: Initial restart count of pod liveness-86dc42f3-e010-4949-b4f0-f641b26979e3 is 0 Aug 27 23:07:21.873: INFO: Restart count of pod container-probe-280/liveness-86dc42f3-e010-4949-b4f0-f641b26979e3 is now 1 (22.04060258s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:21.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-280" for this suite. 
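Editor's note: the restart counted above is the liveness machinery behaving as intended: the kubelet probes the container on the configured path and period, follows local HTTP redirects, and restarts the container once failureThreshold consecutive probes fail. Below is a generic HTTP liveness-probe sketch with placeholder path, port and timings, not the probe used by this particular pod; it assumes a v1.19-era k8s.io/api (where the embedded field is named Handler rather than ProbeHandler).

```go
// Generic sketch of an HTTP liveness probe; path, port and timings are
// placeholders, not the values used by the pod in the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "liveness-demo",
		Image: "registry.example.com/app:latest", // placeholder
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{ // renamed ProbeHandler in newer k8s.io/api releases
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(8080),
				},
			},
			InitialDelaySeconds: 5,
			PeriodSeconds:       5,
			// After this many consecutive failures the kubelet kills the
			// container and the pod's restart policy takes over.
			FailureThreshold: 3,
		},
	}
	fmt.Printf("%s probes %s every %ds\n",
		c.Name, c.LivenessProbe.HTTPGet.Path, c.LivenessProbe.PeriodSeconds)
}
```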
• [SLOW TEST:28.100 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":1,"skipped":55,"failed":0} Aug 27 23:07:21.892: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:16.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:22.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8244" for this suite. • [SLOW TEST:6.047 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":4,"skipped":446,"failed":0} Aug 27 23:07:22.251: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:17.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually 
updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:07:23.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-7325" for this suite. • [SLOW TEST:6.058 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":4,"skipped":617,"failed":0} Aug 27 23:07:23.980: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:16.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 Aug 27 23:07:16.629: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 Aug 27 23:07:16.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9760 create -f -' Aug 27 23:07:17.000: INFO: stderr: "" Aug 27 23:07:17.000: INFO: stdout: "pod/liveness-exec created\n" Aug 27 23:07:17.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9760 create -f -' Aug 27 23:07:17.264: INFO: stderr: "" Aug 27 23:07:17.264: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Aug 27 23:07:21.272: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:23.276: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:23.276: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:25.279: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:25.279: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:27.282: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:27.282: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:29.286: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:29.286: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:31.291: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:31.291: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:33.294: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:33.294: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:35.301: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:35.301: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:37.304: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:37.304: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:39.309: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:39.309: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:41.312: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:41.312: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:43.317: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:43.317: INFO: Pod: 
liveness-http, restart count:0 Aug 27 23:07:45.319: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:45.320: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:47.322: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:47.322: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:49.326: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:49.326: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:51.329: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:51.330: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:53.333: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:53.333: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:55.340: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:55.340: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:57.342: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:57.342: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:07:59.346: INFO: Pod: liveness-http, restart count:0 Aug 27 23:07:59.346: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:01.349: INFO: Pod: liveness-http, restart count:0 Aug 27 23:08:01.349: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:03.353: INFO: Pod: liveness-http, restart count:1 Aug 27 23:08:03.353: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:03.353: INFO: Saw liveness-http restart, succeeded... Aug 27 23:08:05.355: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:07.359: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:09.362: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:11.370: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:13.373: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:15.377: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:17.381: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:19.383: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:21.387: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:23.391: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:25.394: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:27.396: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:29.399: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:31.403: INFO: Pod: liveness-exec, restart count:0 Aug 27 23:08:33.406: INFO: Pod: liveness-exec, restart count:1 Aug 27 23:08:33.406: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:08:33.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-9760" for this suite. 
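------------------------------
The [Feature:Example] check above polls both example pods roughly every two seconds until each one reports a restart. The framework's own implementation lives in test/e2e/examples.go; what follows is only a minimal client-go sketch of the same polling loop, reusing the kubeconfig path, namespace, and pod name from the log, with an assumed five-minute deadline:

// Minimal sketch: poll a pod's container restart counts until one restart is
// seen, mirroring the "Pod: liveness-http, restart count:N" lines above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logs.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	const ns, pod = "examples-9760", "liveness-http" // names copied from the log above
	deadline := time.Now().Add(5 * time.Minute)      // timeout is an assumption

	for time.Now().Before(deadline) {
		p, err := client.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		var restarts int32
		for _, cs := range p.Status.ContainerStatuses {
			restarts += cs.RestartCount
		}
		fmt.Printf("Pod: %s, restart count:%d\n", pod, restarts)
		if restarts > 0 {
			fmt.Printf("Saw %s restart, succeeded...\n", pod)
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for a restart")
}
------------------------------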
• [SLOW TEST:76.810 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted","total":-1,"completed":5,"skipped":1292,"failed":0} Aug 27 23:08:33.417: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:53.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe Aug 27 23:06:53.656: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 27 23:06:53.660: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 STEP: Creating pod liveness-d9b24b00-0cff-4828-be09-2413c88186e1 in namespace container-probe-1679 Aug 27 23:06:59.685: INFO: Started pod liveness-d9b24b00-0cff-4828-be09-2413c88186e1 in namespace container-probe-1679 STEP: checking the pod's current state and verifying that restartCount is present Aug 27 23:06:59.688: INFO: Initial restart count of pod liveness-d9b24b00-0cff-4828-be09-2413c88186e1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:11:00.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1679" for this suite. 
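------------------------------
The NodeLease test that follows waits for node1's lease and then verifies that full NodeStatus reports (the LastHeartbeatTime changes diffed below) arrive no more often than the lease is renewed. As context, a minimal client-go sketch for reading the two objects the test compares, assuming the default layout in which the node's lease lives in the kube-node-lease namespace under the node's name; this helper is not part of the suite:

// Minimal sketch: read node1's Lease and its Ready condition heartbeat, the
// two timestamps the NodeLease test below reasons about.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	const nodeName = "node1" // node checked by the test below

	// The kubelet renews this Lease every few seconds; its duration is
	// typically 40s by default.
	lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.RenewTime != nil && lease.Spec.LeaseDurationSeconds != nil {
		fmt.Printf("lease renewed at %v, duration %ds\n", lease.Spec.RenewTime.Time, *lease.Spec.LeaseDurationSeconds)
	}

	// Full NodeStatus heartbeats (the LastHeartbeatTime values in the diffs
	// below) are posted much less often than lease renewals.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready condition last heartbeat: %v\n", cond.LastHeartbeatTime)
		}
	}
}
------------------------------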
• [SLOW TEST:246.572 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":1,"skipped":3,"failed":0} Aug 27 23:11:00.208: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:16.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 STEP: wait until node is ready Aug 27 23:07:16.466: INFO: Waiting up to 5m0s for node node1 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Aug 27 23:07:17.477: INFO: node status heartbeat is unchanged for 1.003618315s, waiting for 1m20s Aug 27 23:07:18.476: INFO: node status heartbeat is unchanged for 2.002996061s, waiting for 1m20s Aug 27 23:07:19.476: INFO: node status heartbeat is unchanged for 3.003031067s, waiting for 1m20s Aug 27 23:07:20.477: INFO: node status heartbeat is unchanged for 4.003111701s, waiting for 1m20s Aug 27 23:07:21.478: INFO: node status heartbeat is unchanged for 5.004356514s, waiting for 1m20s Aug 27 23:07:22.477: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:07:22.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: 
"BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:21 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:21 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:21 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, NodeInfo: v1.NodeSystemInfo{MachineID: "c1e38e80ea114a5f96601202301ce842", SystemUUID: "00CDA902-D022-E711-906E-0017A4403562", BootID: "e769e86d-15c0-442c-a93b-bcc6c33ff1cd", KernelVersion: "3.10.0-1160.36.2.el7.x86_64", OSImage: "CentOS Linux 7 (Core)", ContainerRuntimeVersion: "docker://19.3.14", KubeletVersion: "v1.19.8", KubeProxyVersion: "v1.19.8", OperatingSystem: "linux", Architecture: "amd64"}, Images: []v1.ContainerImage{ ... // 21 identical elements {Names: []string{"k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b", "k8s.gcr.io/kube-scheduler:v1.19.8"}, SizeBytes: 46510430}, {Names: []string{"localhost:30500/sriov-device-plugin@sha256:d7300ccf7ff3e9cea2111d275143b8050618bbc1d1ffe41f46286b1696261243", "nfvpe/sriov-device-plugin:latest", "localhost:30500/sriov-device-plugin:v3.3.1"}, SizeBytes: 44393508}, + { + Names: []string{ + "gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213", + "gcr.io/kubernetes-e2e-test-images/nonroot:1.0", + }, + SizeBytes: 42321438, + }, {Names: []string{"kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7", "kubernetesui/metrics-scraper:v1.0.6"}, SizeBytes: 34548789}, {Names: []string{"quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee", "quay.io/prometheus/node-exporter:v0.18.1"}, SizeBytes: 22933477}, ... 
// 3 identical elements {Names: []string{"quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd", "quay.io/coreos/prometheus-config-reloader:v0.40.0"}, SizeBytes: 10131705}, {Names: []string{"jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2", "jimmidyson/configmap-reload:v0.3.0"}, SizeBytes: 9700438}, + { + Names: []string{ + "gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411", + "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0", + }, + SizeBytes: 6757579, + }, {Names: []string{"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb", "appropriate/curl:edge"}, SizeBytes: 5654234}, {Names: []string{"alpine@sha256:de25c7fc6c4f3a27c7f0c2dff454e4671823a34d88abd533f210848d527e0fbb", "alpine:3.12"}, SizeBytes: 5581415}, + { + Names: []string{ + "gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0", + "gcr.io/authenticated-image-pulling/alpine:3.7", + }, + SizeBytes: 4206620, + }, + { + Names: []string{ + "busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", + "busybox:1.29", + }, + SizeBytes: 1154361, + }, {Names: []string{"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", "busybox:1.28"}, SizeBytes: 1146369}, {Names: []string{"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa", "k8s.gcr.io/pause:3.3"}, SizeBytes: 682696}, }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } Aug 27 23:07:23.477: INFO: node status heartbeat is unchanged for 999.541407ms, waiting for 1m20s Aug 27 23:07:24.477: INFO: node status heartbeat is unchanged for 1.999483131s, waiting for 1m20s Aug 27 23:07:25.478: INFO: node status heartbeat is unchanged for 3.000525111s, waiting for 1m20s Aug 27 23:07:26.478: INFO: node status heartbeat is unchanged for 4.000589117s, waiting for 1m20s Aug 27 23:07:27.477: INFO: node status heartbeat is unchanged for 5.000164986s, waiting for 1m20s Aug 27 23:07:28.477: INFO: node status heartbeat is unchanged for 6.00003363s, waiting for 1m20s Aug 27 23:07:29.477: INFO: node status heartbeat is unchanged for 7.000086567s, waiting for 1m20s Aug 27 23:07:30.478: INFO: node status heartbeat is unchanged for 8.00077313s, waiting for 1m20s Aug 27 23:07:31.477: INFO: node status heartbeat is unchanged for 8.999918223s, waiting for 1m20s Aug 27 23:07:32.477: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:07:32.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: 
"DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:31 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:31 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:31 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Aug 27 23:07:33.477: INFO: node status heartbeat is unchanged for 1.000193398s, waiting for 1m20s Aug 27 23:07:34.477: INFO: node status heartbeat is unchanged for 1.999804561s, waiting for 1m20s Aug 27 23:07:35.479: INFO: node status heartbeat is unchanged for 3.002063937s, waiting for 1m20s Aug 27 23:07:36.478: INFO: node status heartbeat is unchanged for 4.000535065s, waiting for 1m20s Aug 27 23:07:37.477: INFO: node status heartbeat is unchanged for 5.000247049s, waiting for 1m20s Aug 27 23:07:38.478: INFO: node status heartbeat is unchanged for 6.000589551s, waiting for 1m20s Aug 27 23:07:39.478: INFO: node status heartbeat is unchanged for 7.001453637s, waiting for 1m20s Aug 27 23:07:40.480: INFO: node status heartbeat is unchanged for 8.002647938s, waiting for 1m20s Aug 27 23:07:41.478: INFO: node status heartbeat is unchanged for 9.001107761s, waiting for 1m20s Aug 27 23:07:42.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:07:42.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:41 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:41 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:41 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:07:43.476: INFO: node status heartbeat is unchanged for 998.8817ms, waiting for 1m20s Aug 27 23:07:44.478: INFO: node status heartbeat is unchanged for 2.000485608s, waiting for 1m20s Aug 27 23:07:45.477: INFO: node status heartbeat is unchanged for 2.999479618s, waiting for 1m20s Aug 27 23:07:46.478: INFO: node status heartbeat is unchanged for 4.000091426s, waiting for 1m20s Aug 27 23:07:47.478: INFO: node status heartbeat is unchanged for 5.00021393s, waiting for 1m20s Aug 27 23:07:48.480: INFO: node status heartbeat is unchanged for 6.002278277s, waiting for 1m20s Aug 27 23:07:49.477: INFO: node status heartbeat is unchanged for 6.99921159s, waiting for 1m20s Aug 27 23:07:50.477: INFO: node status heartbeat is unchanged for 7.999835806s, waiting for 1m20s Aug 27 23:07:51.478: INFO: node status heartbeat is unchanged for 9.000232011s, waiting for 1m20s Aug 27 23:07:52.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:07:52.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:51 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:51 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:51 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:07:53.477: INFO: node status heartbeat is unchanged for 998.90599ms, waiting for 1m20s Aug 27 23:07:54.479: INFO: node status heartbeat is unchanged for 2.000931386s, waiting for 1m20s Aug 27 23:07:55.477: INFO: node status heartbeat is unchanged for 2.999249996s, waiting for 1m20s Aug 27 23:07:56.477: INFO: node status heartbeat is unchanged for 3.998618002s, waiting for 1m20s Aug 27 23:07:57.477: INFO: node status heartbeat is unchanged for 4.999476347s, waiting for 1m20s Aug 27 23:07:58.477: INFO: node status heartbeat is unchanged for 5.999278032s, waiting for 1m20s Aug 27 23:07:59.478: INFO: node status heartbeat is unchanged for 6.999525648s, waiting for 1m20s Aug 27 23:08:00.479: INFO: node status heartbeat is unchanged for 8.000682449s, waiting for 1m20s Aug 27 23:08:01.478: INFO: node status heartbeat is unchanged for 9.000004412s, waiting for 1m20s Aug 27 23:08:02.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:08:02.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 
21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:01 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:01 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:07:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:01 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Aug 27 23:08:03.478: INFO: node status heartbeat is unchanged for 1.00042059s, waiting for 1m20s Aug 27 23:08:04.477: INFO: node status heartbeat is unchanged for 1.99960226s, waiting for 1m20s Aug 27 23:08:05.478: INFO: node status heartbeat is unchanged for 3.000222803s, waiting for 1m20s Aug 27 23:08:06.479: INFO: node status heartbeat is unchanged for 4.001750683s, waiting for 1m20s Aug 27 23:08:07.477: INFO: node status heartbeat is unchanged for 4.999904469s, waiting for 1m20s Aug 27 23:08:08.477: INFO: node status heartbeat is unchanged for 5.999654628s, waiting for 1m20s Aug 27 23:08:09.478: INFO: node status heartbeat is unchanged for 7.000822022s, waiting for 1m20s Aug 27 23:08:10.479: INFO: node status heartbeat is unchanged for 8.001479567s, waiting for 1m20s Aug 27 23:08:11.477: INFO: node status heartbeat is unchanged for 8.999743505s, waiting for 1m20s Aug 27 23:08:12.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:08:12.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:01 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:11 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:01 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:11 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:01 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:11 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:08:13.477: INFO: node status heartbeat is unchanged for 999.520368ms, waiting for 1m20s Aug 27 23:08:14.477: INFO: node status heartbeat is unchanged for 1.99966354s, waiting for 1m20s Aug 27 23:08:15.478: INFO: node status heartbeat is unchanged for 3.000360661s, waiting for 1m20s Aug 27 23:08:16.478: INFO: node status heartbeat is unchanged for 3.999740486s, waiting for 1m20s Aug 27 23:08:17.478: INFO: node status heartbeat is unchanged for 4.999970261s, waiting for 1m20s Aug 27 23:08:18.476: INFO: node status heartbeat is unchanged for 5.998528537s, waiting for 1m20s Aug 27 23:08:19.477: INFO: node status heartbeat is unchanged for 6.999698401s, waiting for 1m20s Aug 27 23:08:20.479: INFO: node status heartbeat is unchanged for 8.000840521s, waiting for 1m20s Aug 27 23:08:21.477: INFO: node status heartbeat is unchanged for 8.999393976s, waiting for 1m20s Aug 27 23:08:22.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:08:22.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:21 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:21 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:11 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:21 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:08:23.477: INFO: node status heartbeat is unchanged for 999.880859ms, waiting for 1m20s Aug 27 23:08:24.477: INFO: node status heartbeat is unchanged for 1.999776343s, waiting for 1m20s Aug 27 23:08:25.478: INFO: node status heartbeat is unchanged for 3.000794906s, waiting for 1m20s Aug 27 23:08:26.479: INFO: node status heartbeat is unchanged for 4.001430842s, waiting for 1m20s Aug 27 23:08:27.477: INFO: node status heartbeat is unchanged for 4.999683445s, waiting for 1m20s Aug 27 23:08:28.477: INFO: node status heartbeat is unchanged for 5.999427744s, waiting for 1m20s Aug 27 23:08:29.477: INFO: node status heartbeat is unchanged for 6.999593425s, waiting for 1m20s Aug 27 23:08:30.479: INFO: node status heartbeat is unchanged for 8.001146007s, waiting for 1m20s Aug 27 23:08:31.479: INFO: node status heartbeat is unchanged for 9.001686235s, waiting for 1m20s Aug 27 23:08:32.480: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:08:32.483: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 
21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:31 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:31 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:31 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Aug 27 23:08:33.477: INFO: node status heartbeat is unchanged for 997.392444ms, waiting for 1m20s Aug 27 23:08:34.478: INFO: node status heartbeat is unchanged for 1.997920521s, waiting for 1m20s Aug 27 23:08:35.479: INFO: node status heartbeat is unchanged for 2.999341303s, waiting for 1m20s Aug 27 23:08:36.479: INFO: node status heartbeat is unchanged for 3.998972809s, waiting for 1m20s Aug 27 23:08:37.478: INFO: node status heartbeat is unchanged for 4.998217098s, waiting for 1m20s Aug 27 23:08:38.476: INFO: node status heartbeat is unchanged for 5.996708841s, waiting for 1m20s Aug 27 23:08:39.479: INFO: node status heartbeat is unchanged for 6.999086565s, waiting for 1m20s Aug 27 23:08:40.479: INFO: node status heartbeat is unchanged for 7.9994952s, waiting for 1m20s Aug 27 23:08:41.477: INFO: node status heartbeat is unchanged for 8.997867027s, waiting for 1m20s Aug 27 23:08:42.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:08:42.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:41 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:41 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:41 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:08:43.477: INFO: node status heartbeat is unchanged for 998.688521ms, waiting for 1m20s Aug 27 23:08:44.480: INFO: node status heartbeat is unchanged for 2.00182236s, waiting for 1m20s Aug 27 23:08:45.478: INFO: node status heartbeat is unchanged for 2.999815871s, waiting for 1m20s Aug 27 23:08:46.477: INFO: node status heartbeat is unchanged for 3.998779221s, waiting for 1m20s Aug 27 23:08:47.478: INFO: node status heartbeat is unchanged for 4.999895157s, waiting for 1m20s Aug 27 23:08:48.477: INFO: node status heartbeat is unchanged for 5.998951947s, waiting for 1m20s Aug 27 23:08:49.477: INFO: node status heartbeat is unchanged for 6.998945464s, waiting for 1m20s Aug 27 23:08:50.477: INFO: node status heartbeat is unchanged for 7.999280217s, waiting for 1m20s Aug 27 23:08:51.477: INFO: node status heartbeat is unchanged for 8.999453294s, waiting for 1m20s Aug 27 23:08:52.477: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:08:52.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:51 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:51 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:51 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:08:53.477: INFO: node status heartbeat is unchanged for 1.000171467s, waiting for 1m20s Aug 27 23:08:54.483: INFO: node status heartbeat is unchanged for 2.006079131s, waiting for 1m20s Aug 27 23:08:55.480: INFO: node status heartbeat is unchanged for 3.00363946s, waiting for 1m20s Aug 27 23:08:56.480: INFO: node status heartbeat is unchanged for 4.002931634s, waiting for 1m20s Aug 27 23:08:57.479: INFO: node status heartbeat is unchanged for 5.002641061s, waiting for 1m20s Aug 27 23:08:58.477: INFO: node status heartbeat is unchanged for 6.000022348s, waiting for 1m20s Aug 27 23:08:59.478: INFO: node status heartbeat is unchanged for 7.001246978s, waiting for 1m20s Aug 27 23:09:00.476: INFO: node status heartbeat is unchanged for 7.999835201s, waiting for 1m20s Aug 27 23:09:01.479: INFO: node status heartbeat is unchanged for 9.001936271s, waiting for 1m20s Aug 27 23:09:02.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:09:02.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 
21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:01 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:01 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:08:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:01 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Aug 27 23:09:03.478: INFO: node status heartbeat is unchanged for 1.000205959s, waiting for 1m20s Aug 27 23:09:04.480: INFO: node status heartbeat is unchanged for 2.002654777s, waiting for 1m20s Aug 27 23:09:05.477: INFO: node status heartbeat is unchanged for 2.999588353s, waiting for 1m20s Aug 27 23:09:06.478: INFO: node status heartbeat is unchanged for 4.000073884s, waiting for 1m20s Aug 27 23:09:07.478: INFO: node status heartbeat is unchanged for 4.999987982s, waiting for 1m20s Aug 27 23:09:08.477: INFO: node status heartbeat is unchanged for 5.999329975s, waiting for 1m20s Aug 27 23:09:09.477: INFO: node status heartbeat is unchanged for 6.999662495s, waiting for 1m20s Aug 27 23:09:10.479: INFO: node status heartbeat is unchanged for 8.001358217s, waiting for 1m20s Aug 27 23:09:11.478: INFO: node status heartbeat is unchanged for 9.000606879s, waiting for 1m20s Aug 27 23:09:12.478: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Aug 27 23:09:12.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:01 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:01 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:01 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:09:13.477: INFO: node status heartbeat is unchanged for 999.267153ms, waiting for 1m20s Aug 27 23:09:14.479: INFO: node status heartbeat is unchanged for 2.001356633s, waiting for 1m20s Aug 27 23:09:15.480: INFO: node status heartbeat is unchanged for 3.00230337s, waiting for 1m20s Aug 27 23:09:16.478: INFO: node status heartbeat is unchanged for 4.000438972s, waiting for 1m20s Aug 27 23:09:17.478: INFO: node status heartbeat is unchanged for 4.99999259s, waiting for 1m20s Aug 27 23:09:18.477: INFO: node status heartbeat is unchanged for 5.999454829s, waiting for 1m20s Aug 27 23:09:19.479: INFO: node status heartbeat is unchanged for 7.001490536s, waiting for 1m20s Aug 27 23:09:20.480: INFO: node status heartbeat is unchanged for 8.001969753s, waiting for 1m20s Aug 27 23:09:21.477: INFO: node status heartbeat is unchanged for 8.999627162s, waiting for 1m20s Aug 27 23:09:22.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:09:22.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:09:23.481: INFO: node status heartbeat is unchanged for 1.002971222s, waiting for 1m20s Aug 27 23:09:24.480: INFO: node status heartbeat is unchanged for 2.001472492s, waiting for 1m20s Aug 27 23:09:25.479: INFO: node status heartbeat is unchanged for 3.000673477s, waiting for 1m20s Aug 27 23:09:26.479: INFO: node status heartbeat is unchanged for 4.001139264s, waiting for 1m20s Aug 27 23:09:27.479: INFO: node status heartbeat is unchanged for 5.000427756s, waiting for 1m20s Aug 27 23:09:28.480: INFO: node status heartbeat is unchanged for 6.00156456s, waiting for 1m20s Aug 27 23:09:29.478: INFO: node status heartbeat is unchanged for 7.000095588s, waiting for 1m20s Aug 27 23:09:30.479: INFO: node status heartbeat is unchanged for 8.000533654s, waiting for 1m20s Aug 27 23:09:31.479: INFO: node status heartbeat is unchanged for 9.000733077s, waiting for 1m20s Aug 27 23:09:32.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:09:32.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 
21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Aug 27 23:09:33.478: INFO: node status heartbeat is unchanged for 1.000233425s, waiting for 1m20s Aug 27 23:09:34.480: INFO: node status heartbeat is unchanged for 2.002121894s, waiting for 1m20s Aug 27 23:09:35.480: INFO: node status heartbeat is unchanged for 3.002326886s, waiting for 1m20s Aug 27 23:09:36.480: INFO: node status heartbeat is unchanged for 4.002213212s, waiting for 1m20s Aug 27 23:09:37.478: INFO: node status heartbeat is unchanged for 5.000837469s, waiting for 1m20s Aug 27 23:09:38.480: INFO: node status heartbeat is unchanged for 6.002556784s, waiting for 1m20s Aug 27 23:09:39.478: INFO: node status heartbeat is unchanged for 7.000201307s, waiting for 1m20s Aug 27 23:09:40.478: INFO: node status heartbeat is unchanged for 8.000328603s, waiting for 1m20s Aug 27 23:09:41.477: INFO: node status heartbeat is unchanged for 8.999498167s, waiting for 1m20s Aug 27 23:09:42.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:09:42.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:09:43.479: INFO: node status heartbeat is unchanged for 1.000828612s, waiting for 1m20s Aug 27 23:09:44.480: INFO: node status heartbeat is unchanged for 2.00200057s, waiting for 1m20s Aug 27 23:09:45.478: INFO: node status heartbeat is unchanged for 3.000117633s, waiting for 1m20s Aug 27 23:09:46.477: INFO: node status heartbeat is unchanged for 3.998848622s, waiting for 1m20s Aug 27 23:09:47.477: INFO: node status heartbeat is unchanged for 4.999358691s, waiting for 1m20s Aug 27 23:09:48.479: INFO: node status heartbeat is unchanged for 6.001195115s, waiting for 1m20s Aug 27 23:09:49.478: INFO: node status heartbeat is unchanged for 7.000378899s, waiting for 1m20s Aug 27 23:09:50.477: INFO: node status heartbeat is unchanged for 7.999054238s, waiting for 1m20s Aug 27 23:09:51.477: INFO: node status heartbeat is unchanged for 8.999589127s, waiting for 1m20s Aug 27 23:09:52.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:09:52.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:09:53.479: INFO: node status heartbeat is unchanged for 1.001136159s, waiting for 1m20s Aug 27 23:09:54.478: INFO: node status heartbeat is unchanged for 2.000189927s, waiting for 1m20s Aug 27 23:09:55.480: INFO: node status heartbeat is unchanged for 3.002455159s, waiting for 1m20s Aug 27 23:09:56.478: INFO: node status heartbeat is unchanged for 3.999875186s, waiting for 1m20s Aug 27 23:09:57.478: INFO: node status heartbeat is unchanged for 5.000816433s, waiting for 1m20s Aug 27 23:09:58.479: INFO: node status heartbeat is unchanged for 6.000913415s, waiting for 1m20s Aug 27 23:09:59.480: INFO: node status heartbeat is unchanged for 7.002631972s, waiting for 1m20s Aug 27 23:10:00.478: INFO: node status heartbeat is unchanged for 8.00056381s, waiting for 1m20s Aug 27 23:10:01.477: INFO: node status heartbeat is unchanged for 8.999790264s, waiting for 1m20s Aug 27 23:10:02.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:10:02.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 
21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:09:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Aug 27 23:10:03.477: INFO: node status heartbeat is unchanged for 999.550351ms, waiting for 1m20s Aug 27 23:10:04.479: INFO: node status heartbeat is unchanged for 2.000851679s, waiting for 1m20s Aug 27 23:10:05.478: INFO: node status heartbeat is unchanged for 2.999891905s, waiting for 1m20s Aug 27 23:10:06.478: INFO: node status heartbeat is unchanged for 4.000329045s, waiting for 1m20s Aug 27 23:10:07.478: INFO: node status heartbeat is unchanged for 5.000245074s, waiting for 1m20s Aug 27 23:10:08.480: INFO: node status heartbeat is unchanged for 6.002041735s, waiting for 1m20s Aug 27 23:10:09.478: INFO: node status heartbeat is unchanged for 6.999603074s, waiting for 1m20s Aug 27 23:10:10.479: INFO: node status heartbeat is unchanged for 8.001512479s, waiting for 1m20s Aug 27 23:10:11.480: INFO: node status heartbeat is unchanged for 9.00167524s, waiting for 1m20s Aug 27 23:10:12.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:10:12.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:10:13.477: INFO: node status heartbeat is unchanged for 999.783468ms, waiting for 1m20s Aug 27 23:10:14.481: INFO: node status heartbeat is unchanged for 2.003166244s, waiting for 1m20s Aug 27 23:10:15.479: INFO: node status heartbeat is unchanged for 3.00187604s, waiting for 1m20s Aug 27 23:10:16.478: INFO: node status heartbeat is unchanged for 4.000620945s, waiting for 1m20s Aug 27 23:10:17.478: INFO: node status heartbeat is unchanged for 5.000480477s, waiting for 1m20s Aug 27 23:10:18.478: INFO: node status heartbeat is unchanged for 6.000403539s, waiting for 1m20s Aug 27 23:10:19.479: INFO: node status heartbeat is unchanged for 7.001902236s, waiting for 1m20s Aug 27 23:10:20.479: INFO: node status heartbeat is unchanged for 8.001510486s, waiting for 1m20s Aug 27 23:10:21.478: INFO: node status heartbeat is unchanged for 9.000564024s, waiting for 1m20s Aug 27 23:10:22.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:10:22.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:10:23.479: INFO: node status heartbeat is unchanged for 1.001279608s, waiting for 1m20s Aug 27 23:10:24.480: INFO: node status heartbeat is unchanged for 2.00208952s, waiting for 1m20s Aug 27 23:10:25.478: INFO: node status heartbeat is unchanged for 3.000656278s, waiting for 1m20s Aug 27 23:10:26.478: INFO: node status heartbeat is unchanged for 4.000492189s, waiting for 1m20s Aug 27 23:10:27.478: INFO: node status heartbeat is unchanged for 5.000238536s, waiting for 1m20s Aug 27 23:10:28.479: INFO: node status heartbeat is unchanged for 6.001704633s, waiting for 1m20s Aug 27 23:10:29.477: INFO: node status heartbeat is unchanged for 6.999787613s, waiting for 1m20s Aug 27 23:10:30.479: INFO: node status heartbeat is unchanged for 8.001355724s, waiting for 1m20s Aug 27 23:10:31.479: INFO: node status heartbeat is unchanged for 9.001248866s, waiting for 1m20s Aug 27 23:10:32.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:10:32.483: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 
21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Aug 27 23:10:33.477: INFO: node status heartbeat is unchanged for 999.130427ms, waiting for 1m20s Aug 27 23:10:34.479: INFO: node status heartbeat is unchanged for 2.000975269s, waiting for 1m20s Aug 27 23:10:35.479: INFO: node status heartbeat is unchanged for 3.000778396s, waiting for 1m20s Aug 27 23:10:36.479: INFO: node status heartbeat is unchanged for 4.001304358s, waiting for 1m20s Aug 27 23:10:37.478: INFO: node status heartbeat is unchanged for 5.00000524s, waiting for 1m20s Aug 27 23:10:38.479: INFO: node status heartbeat is unchanged for 6.001409965s, waiting for 1m20s Aug 27 23:10:39.479: INFO: node status heartbeat is unchanged for 7.001453345s, waiting for 1m20s Aug 27 23:10:40.479: INFO: node status heartbeat is unchanged for 8.001275295s, waiting for 1m20s Aug 27 23:10:41.478: INFO: node status heartbeat is unchanged for 9.000111533s, waiting for 1m20s Aug 27 23:10:42.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:10:42.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:10:43.478: INFO: node status heartbeat is unchanged for 1.000648252s, waiting for 1m20s Aug 27 23:10:44.478: INFO: node status heartbeat is unchanged for 1.999737716s, waiting for 1m20s Aug 27 23:10:45.480: INFO: node status heartbeat is unchanged for 3.001934667s, waiting for 1m20s Aug 27 23:10:46.478: INFO: node status heartbeat is unchanged for 4.000555343s, waiting for 1m20s Aug 27 23:10:47.478: INFO: node status heartbeat is unchanged for 4.999834852s, waiting for 1m20s Aug 27 23:10:48.477: INFO: node status heartbeat is unchanged for 5.999086476s, waiting for 1m20s Aug 27 23:10:49.478: INFO: node status heartbeat is unchanged for 7.000288518s, waiting for 1m20s Aug 27 23:10:50.478: INFO: node status heartbeat is unchanged for 7.999788375s, waiting for 1m20s Aug 27 23:10:51.477: INFO: node status heartbeat is unchanged for 8.999561745s, waiting for 1m20s Aug 27 23:10:52.478: INFO: node status heartbeat is unchanged for 9.999736734s, waiting for 1m20s Aug 27 23:10:53.477: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:10:53.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", 
Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:10:54.478: INFO: node status heartbeat is unchanged for 1.000563759s, waiting for 1m20s Aug 27 23:10:55.479: INFO: node status heartbeat is unchanged for 2.00191886s, waiting for 1m20s Aug 27 23:10:56.477: INFO: node status heartbeat is unchanged for 3.000110771s, waiting for 1m20s Aug 27 23:10:57.477: INFO: node status heartbeat is unchanged for 3.999775056s, waiting for 1m20s Aug 27 23:10:58.477: INFO: node status heartbeat is unchanged for 4.999811054s, waiting for 1m20s Aug 27 23:10:59.477: INFO: node status heartbeat is unchanged for 6.000187282s, waiting for 1m20s Aug 27 23:11:00.478: INFO: node status heartbeat is unchanged for 7.000960841s, waiting for 1m20s Aug 27 23:11:01.478: INFO: node status heartbeat is unchanged for 8.000351025s, waiting for 1m20s Aug 27 23:11:02.477: INFO: node status heartbeat is unchanged for 9.000292219s, waiting for 1m20s Aug 27 23:11:03.478: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:11:03.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, 
s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:10:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Aug 27 23:11:04.480: INFO: node status heartbeat is unchanged for 1.002015716s, waiting for 1m20s Aug 27 23:11:05.478: INFO: node status heartbeat is unchanged for 1.999423571s, waiting for 1m20s Aug 27 23:11:06.480: INFO: node status heartbeat is unchanged for 3.001373705s, waiting for 1m20s Aug 27 23:11:07.478: INFO: node status heartbeat is unchanged for 3.999716504s, waiting for 1m20s Aug 27 23:11:08.480: INFO: node status heartbeat is unchanged for 5.00131521s, waiting for 1m20s Aug 27 23:11:09.478: INFO: node status heartbeat is unchanged for 5.999296972s, waiting for 1m20s Aug 27 23:11:10.478: INFO: node status heartbeat is unchanged for 6.999385298s, waiting for 1m20s Aug 27 23:11:11.477: INFO: node status heartbeat is unchanged for 7.998921389s, waiting for 1m20s Aug 27 23:11:12.477: INFO: node status heartbeat is unchanged for 8.998338411s, waiting for 1m20s Aug 27 23:11:13.477: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:11:13.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:11:14.477: INFO: node status heartbeat is unchanged for 1.00023457s, waiting for 1m20s Aug 27 23:11:15.477: INFO: node status heartbeat is unchanged for 2.000430839s, waiting for 1m20s Aug 27 23:11:16.478: INFO: node status heartbeat is unchanged for 3.00085946s, waiting for 1m20s Aug 27 23:11:17.478: INFO: node status heartbeat is unchanged for 4.001529007s, waiting for 1m20s Aug 27 23:11:18.480: INFO: node status heartbeat is unchanged for 5.003719658s, waiting for 1m20s Aug 27 23:11:19.480: INFO: node status heartbeat is unchanged for 6.00342041s, waiting for 1m20s Aug 27 23:11:20.482: INFO: node status heartbeat is unchanged for 7.004920312s, waiting for 1m20s Aug 27 23:11:21.479: INFO: node status heartbeat is unchanged for 8.002038448s, waiting for 1m20s Aug 27 23:11:22.478: INFO: node status heartbeat is unchanged for 9.001095631s, waiting for 1m20s Aug 27 23:11:23.480: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:11:23.483: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:11:24.479: INFO: node status heartbeat is unchanged for 998.85763ms, waiting for 1m20s Aug 27 23:11:25.478: INFO: node status heartbeat is unchanged for 1.998668051s, waiting for 1m20s Aug 27 23:11:26.479: INFO: node status heartbeat is unchanged for 2.999029636s, waiting for 1m20s Aug 27 23:11:27.477: INFO: node status heartbeat is unchanged for 3.997659101s, waiting for 1m20s Aug 27 23:11:28.477: INFO: node status heartbeat is unchanged for 4.997347228s, waiting for 1m20s Aug 27 23:11:29.477: INFO: node status heartbeat is unchanged for 5.997492289s, waiting for 1m20s Aug 27 23:11:30.478: INFO: node status heartbeat is unchanged for 6.998633464s, waiting for 1m20s Aug 27 23:11:31.477: INFO: node status heartbeat is unchanged for 7.997616031s, waiting for 1m20s Aug 27 23:11:32.478: INFO: node status heartbeat is unchanged for 8.998478028s, waiting for 1m20s Aug 27 23:11:33.477: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:11:33.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 
21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Aug 27 23:11:34.477: INFO: node status heartbeat is unchanged for 999.433803ms, waiting for 1m20s Aug 27 23:11:35.477: INFO: node status heartbeat is unchanged for 1.999457764s, waiting for 1m20s Aug 27 23:11:36.477: INFO: node status heartbeat is unchanged for 2.999866361s, waiting for 1m20s Aug 27 23:11:37.477: INFO: node status heartbeat is unchanged for 3.999939499s, waiting for 1m20s Aug 27 23:11:38.478: INFO: node status heartbeat is unchanged for 5.000683171s, waiting for 1m20s Aug 27 23:11:39.477: INFO: node status heartbeat is unchanged for 6.000126673s, waiting for 1m20s Aug 27 23:11:40.476: INFO: node status heartbeat is unchanged for 6.999109123s, waiting for 1m20s Aug 27 23:11:41.476: INFO: node status heartbeat is unchanged for 7.999133567s, waiting for 1m20s Aug 27 23:11:42.477: INFO: node status heartbeat is unchanged for 8.999675733s, waiting for 1m20s Aug 27 23:11:43.477: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:11:43.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:11:44.477: INFO: node status heartbeat is unchanged for 999.812524ms, waiting for 1m20s Aug 27 23:11:45.477: INFO: node status heartbeat is unchanged for 1.999426274s, waiting for 1m20s Aug 27 23:11:46.477: INFO: node status heartbeat is unchanged for 2.99952155s, waiting for 1m20s Aug 27 23:11:47.478: INFO: node status heartbeat is unchanged for 4.000271897s, waiting for 1m20s Aug 27 23:11:48.478: INFO: node status heartbeat is unchanged for 5.000310541s, waiting for 1m20s Aug 27 23:11:49.479: INFO: node status heartbeat is unchanged for 6.001822909s, waiting for 1m20s Aug 27 23:11:50.477: INFO: node status heartbeat is unchanged for 6.999473514s, waiting for 1m20s Aug 27 23:11:51.478: INFO: node status heartbeat is unchanged for 8.000389877s, waiting for 1m20s Aug 27 23:11:52.477: INFO: node status heartbeat is unchanged for 8.999657179s, waiting for 1m20s Aug 27 23:11:53.477: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:11:53.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:11:54.477: INFO: node status heartbeat is unchanged for 1.000486113s, waiting for 1m20s Aug 27 23:11:55.477: INFO: node status heartbeat is unchanged for 2.000364965s, waiting for 1m20s Aug 27 23:11:56.477: INFO: node status heartbeat is unchanged for 3.000101549s, waiting for 1m20s Aug 27 23:11:57.477: INFO: node status heartbeat is unchanged for 4.000483659s, waiting for 1m20s Aug 27 23:11:58.477: INFO: node status heartbeat is unchanged for 5.000125223s, waiting for 1m20s Aug 27 23:11:59.478: INFO: node status heartbeat is unchanged for 6.001329367s, waiting for 1m20s Aug 27 23:12:00.477: INFO: node status heartbeat is unchanged for 6.999798331s, waiting for 1m20s Aug 27 23:12:01.477: INFO: node status heartbeat is unchanged for 7.999829721s, waiting for 1m20s Aug 27 23:12:02.477: INFO: node status heartbeat is unchanged for 9.00041724s, waiting for 1m20s Aug 27 23:12:03.478: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Aug 27 23:12:03.480: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 
21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:12:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:12:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:11:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:12:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Aug 27 23:12:04.477: INFO: node status heartbeat is unchanged for 999.090565ms, waiting for 1m20s Aug 27 23:12:05.478: INFO: node status heartbeat is unchanged for 2.000289236s, waiting for 1m20s Aug 27 23:12:06.478: INFO: node status heartbeat is unchanged for 3.000496794s, waiting for 1m20s Aug 27 23:12:07.479: INFO: node status heartbeat is unchanged for 4.001319376s, waiting for 1m20s Aug 27 23:12:08.477: INFO: node status heartbeat is unchanged for 4.999720376s, waiting for 1m20s Aug 27 23:12:09.477: INFO: node status heartbeat is unchanged for 5.999641855s, waiting for 1m20s Aug 27 23:12:10.477: INFO: node status heartbeat is unchanged for 6.999935187s, waiting for 1m20s Aug 27 23:12:11.478: INFO: node status heartbeat is unchanged for 8.000503403s, waiting for 1m20s Aug 27 23:12:12.478: INFO: node status heartbeat is unchanged for 9.000535476s, waiting for 1m20s Aug 27 23:12:13.477: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Aug 27 23:12:13.481: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269628928}, s: "196552372Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884628480}, s: "174692020Ki", Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:51:49 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:12:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:12:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:12:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:12:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:12:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-08-27 23:12:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:09 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-08-27 20:48:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Aug 27 23:12:14.478: INFO: node status heartbeat is unchanged for 1.000497609s, waiting for 1m20s Aug 27 23:12:15.479: INFO: node status heartbeat is unchanged for 2.0020919s, waiting for 1m20s Aug 27 23:12:16.477: INFO: node status heartbeat is unchanged for 2.9992873s, waiting for 1m20s Aug 27 23:12:16.480: INFO: node status heartbeat is unchanged for 3.002286039s, waiting for 1m20s STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:12:16.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-2435" for this suite. • [SLOW TEST:300.053 seconds] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":2,"skipped":331,"failed":0} Aug 27 23:12:16.502: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:06:54.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:678 STEP: getting restart delay-0 Aug 27 23:08:51.491: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-08-27 23:08:06 +0000 UTC restartedAt=2021-08-27 23:08:50 +0000 UTC (44s) STEP: getting restart delay-1 Aug 27 23:10:19.826: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-08-27 23:08:55 +0000 UTC restartedAt=2021-08-27 23:10:18 +0000 UTC (1m23s) STEP: getting restart delay-2 Aug 27 23:13:16.543: INFO: getRestartDelay: restartCount = 6, finishedAt=2021-08-27 23:10:23 +0000 UTC restartedAt=2021-08-27 23:13:15 +0000 UTC (2m52s) STEP: updating the image Aug 27 23:13:17.054: INFO: Successfully updated pod "pod-back-off-image" 
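The three restart delays just logged (44s, then 1m23s, then 2m52s) roughly double on each crash, which matches the kubelet's exponential crash-loop back-off: it starts at about 10s, doubles per restart, and is capped at MaxContainerBackOff (5 minutes); the extra seconds on top of 40s/80s/160s are image-pull and container-start overhead. As the next step below shows, updating the image resets that timer, and a later spec in this run confirms the ~5m cap. A minimal illustrative sketch of the doubling, assuming those default values (not the kubelet's actual implementation):

package main

import (
	"fmt"
	"time"
)

// backoffSequence models an exponential restart back-off: start at `initial`,
// double after every restart, and never exceed `maxDelay`. The 10s / 5m values
// used below are the commonly cited kubelet defaults (assumption, for illustration).
func backoffSequence(initial, maxDelay time.Duration, restarts int) []time.Duration {
	delays := make([]time.Duration, 0, restarts)
	d := initial
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	// Prints 10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s; compare with the observed
	// delays above (44s, 1m23s, 2m52s) and the ~5m10s delays once capped.
	for _, d := range backoffSequence(10*time.Second, 5*time.Minute, 8) {
		fmt.Println(d)
	}
}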
STEP: get restart delay after image update Aug 27 23:13:43.123: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-08-27 23:13:28 +0000 UTC restartedAt=2021-08-27 23:13:41 +0000 UTC (13s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:13:43.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1999" for this suite. • [SLOW TEST:408.873 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:678 ------------------------------ {"msg":"PASSED [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":1,"skipped":240,"failed":0} Aug 27 23:13:43.135: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 23:07:05.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:719 STEP: getting restart delay when capped Aug 27 23:18:50.218: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-08-27 23:13:39 +0000 UTC restartedAt=2021-08-27 23:18:49 +0000 UTC (5m10s) Aug 27 23:24:09.536: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-08-27 23:18:54 +0000 UTC restartedAt=2021-08-27 23:24:08 +0000 UTC (5m14s) Aug 27 23:29:22.869: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-08-27 23:24:13 +0000 UTC restartedAt=2021-08-27 23:29:22 +0000 UTC (5m9s) STEP: getting restart delay after a capped delay Aug 27 23:34:36.105: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-08-27 23:29:27 +0000 UTC restartedAt=2021-08-27 23:34:35 +0000 UTC (5m8s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 23:34:36.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3088" for this suite. • [SLOW TEST:1650.354 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:719 ------------------------------ {"msg":"PASSED [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":3,"skipped":934,"failed":0} Aug 27 23:34:36.122: INFO: Running AfterSuite actions on all nodes Aug 27 23:34:36.123: INFO: Running AfterSuite actions on node 1 Aug 27 23:34:36.123: INFO: Skipping dumping logs from cluster Ran 30 of 5484 Specs in 1662.733 seconds SUCCESS! 
-- 30 Passed | 0 Failed | 0 Pending | 5454 Skipped

Ginkgo ran 1 suite in 27m44.151289613s
Test Suite Passed
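For reference, the NodeLease spec earlier in this run polls the node once a second, diffs successive NodeStatus snapshots (the v1.NodeStatus blocks above, where only the condition LastHeartbeatTime fields move, roughly every 10s), and verifies the node stays Ready even though full status reports are infrequent, since the node lease in kube-node-lease carries the frequent keep-alive. Below is a minimal client-go sketch of that kind of observation, assuming a kubeconfig at the default location and the node name "node1" from this run; it is illustrative and not the e2e framework's own helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default path; "node1" is the node observed in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	var lastReady metav1.Time
	for i := 0; i < 6; i++ {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node1", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Report whenever the Ready condition's heartbeat timestamp moves,
		// mirroring the "node status heartbeat changed" lines above.
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && !c.LastHeartbeatTime.Equal(&lastReady) {
				fmt.Printf("Ready heartbeat moved: %s (status %s)\n", c.LastHeartbeatTime, c.Status)
				lastReady = c.LastHeartbeatTime
			}
		}
		// The node lease is renewed far more often than the full status is
		// reported; its RenewTime shows kubelet liveness.
		lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), "node1", metav1.GetOptions{})
		if err == nil && lease.Spec.RenewTime != nil {
			fmt.Printf("lease renewed at %s\n", lease.Spec.RenewTime)
		}
		time.Sleep(10 * time.Second)
	}
}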