Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1618416340 - Will randomize all specs
Will run 4994 specs

Running in parallel across 25 nodes

Apr 14 16:05:41.784: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.787: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 14 16:05:41.814: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 14 16:05:41.856: INFO: The status of Pod cmk-init-discover-node1-ppgf5 is Succeeded, skipping waiting
Apr 14 16:05:41.856: INFO: The status of Pod cmk-init-discover-node2-tqmv6 is Succeeded, skipping waiting
Apr 14 16:05:41.856: INFO: 40 / 43 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 14 16:05:41.856: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 14 16:05:41.856: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 14 16:05:41.871: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 14 16:05:41.871: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 14 16:05:41.871: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 14 16:05:41.871: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 14 16:05:41.871: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 14 16:05:41.871: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 14 16:05:41.871: INFO: e2e test version: v1.18.17
Apr 14 16:05:41.872: INFO: kube-apiserver version: v1.18.8
Apr 14 16:05:41.872: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.880: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.874: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.895: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.874: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.897: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.878: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.899: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.880: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.900: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.880: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.902: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.880: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.904: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.886: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.912: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.887: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.912: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.887: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.913: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.894: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.915: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.896: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.918: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.896: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.919: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.894: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.919: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.900: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.923: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.899: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.924: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.908: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.930: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.916: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.937: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.912: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.938: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.912: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.936: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.920: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.939: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Apr 14 16:05:41.919: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.940: INFO: Cluster IP family: ipv4
Apr 14 16:05:41.920: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.940: INFO: Cluster IP family: ipv4
SS
------------------------------
Apr 14 16:05:41.923: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.945: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Apr 14 16:05:41.944: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:05:41.965: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename autoscaling
Apr 14 16:05:42.034: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Apr 14 16:05:42.042: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-1105
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71
Apr 14 16:05:42.147: INFO: Only supported for providers [gce gke kubemark] (not )
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:42.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "autoscaling-1105" for this suite.
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115
STEP: Restoring initial size of the cluster
E0414 16:05:42.156955 23 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 169 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x4032680, 0x798f6e0)
  /usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc0025cc7d0, 0xc000fda380, 0x7ff3612de008)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0025cc8c8, 0xc69e00, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000a5f320, 0xc0025cc8c8, 0xc000a5f320, 0x504ce8)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0025cc8c8, 0x452108, 0xc0025cc8b0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7b4c620, 0x84, 0x4e7777)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2
k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0026b9470, 0x25, 0xc002307e00, 0xc00286d200)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9
k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47
k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000c59320, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000c59320, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0010549d0, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0025cd6c8, 0xc00185eff0, 0x51d23a0, 0xc00015c940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00185eff0, 0x0, 0x51d23a0, 0xc00015c940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00185eff0, 0x51d23a0, 0xc00015c940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0024f0000, 0xc00185eff0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0024f0000, 0x1)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0024f0000, 0xc0024ea018)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001982d0, 0x7ff35e38ed80, 0xc00026da00, 0x495020e, 0x14, 0xc002ebef30, 0x3, 0x3, 0x529bcc0, 0xc00015c940, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc00026da00, 0x495020e, 0x14, 0xc002779000, 0x3, 0x4, 0x4)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc00026da00, 0x495020e, 0x14, 0xc00300ca00, 0x2, 0x2, 0x2)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00026da00)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc00026da00)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc00026da00, 0x4afadb8)
  /usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:960 +0x350
S [SKIPPING] in Spec Setup (BeforeEach) [0.142 seconds]
[k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should scale down empty nodes [Feature:ClusterAutoscalerScalability3] [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:210
  Only supported for providers [gce gke kubemark] (not )
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
Apr 14 16:05:42.104: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Apr 14 16:05:42.113: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-lease-test-4272
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43
[It] the kubelet should create and update a lease in the kube-node-lease namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:50
STEP: check that lease for this Kubelet exists in the kube-node-lease namespace
STEP: check that node lease is updated at least once within the lease duration
[AfterEach] [k8s.io] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:42.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-4272" for this suite.
•SSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":1,"skipped":24,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
Apr 14 16:05:42.144: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Apr 14 16:05:42.152: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-lease-test-9846
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43
[It] should have OwnerReferences set
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:88
[AfterEach] [k8s.io] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:42.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-9846" for this suite.
•SSSSSSSSSSSS
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":39,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename autoscaling
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-1851
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71
Apr 14 16:05:42.297: INFO: Only supported for providers [gce gke kubemark] (not )
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:42.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "autoscaling-1851" for this suite.
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115
STEP: Restoring initial size of the cluster
E0414 16:05:42.307879 23 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 169 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x4032680, 0x798f6e0)
  /usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc0025cc7d0, 0xc000fb8380, 0x7ff3612ded98)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0025cc8c8, 0xc69e00, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00335c160, 0xc0025cc8c8, 0xc00335c160, 0x504ce8)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0025cc8c8, 0x452108, 0xc0025cc8b0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7b4c620, 0x96, 0x4e7777)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2
k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc00296f530, 0x25, 0xc003064660, 0xc002839200)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9
k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47
k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000c59320, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000c59320, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0010549d0, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0025cd6c8, 0xc00185f0e0, 0x51d23a0, 0xc00015c940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00185f0e0, 0x0, 0x51d23a0, 0xc00015c940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00185f0e0, 0x51d23a0, 0xc00015c940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0024f0000, 0xc00185f0e0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0024f0000, 0x1)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0024f0000, 0xc0024ea018)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001982d0, 0x7ff35e38ed80, 0xc00026da00, 0x495020e, 0x14, 0xc002ebef30, 0x3, 0x3, 0x529bcc0, 0xc00015c940, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc00026da00, 0x495020e, 0x14, 0xc002779000, 0x3, 0x4, 0x4)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc00026da00, 0x495020e, 0x14, 0xc00300ca00, 0x2, 0x2, 0x2)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00026da00)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc00026da00)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc00026da00, 0x4afadb8)
  /usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:960 +0x350
S [SKIPPING] in Spec Setup (BeforeEach) [0.133 seconds]
[k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4] [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:238
  Only supported for providers [gce gke kubemark] (not )
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename autoscaling
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-565
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71
Apr 14 16:05:42.390: INFO: Only supported for providers [gce gke kubemark] (not )
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:42.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Destroying namespace "autoscaling-565" for this suite. [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0414 16:05:42.401117 103 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 331 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82 panic(0x4032680, 0x798f6e0) /usr/local/go/src/runtime/panic.go:679 +0x1b2 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc002b787d0, 0x7b4e480, 0x7fb3eba0fb28) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002b788c8, 0xc69e00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00334e3e0, 0xc002b788c8, 0xc00334e3e0, 0x504ce8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc002b788c8, 0x452108, 0xc002b788b0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7b4c620, 0x85, 0x4e7777) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002f04990, 0x25, 0xc002d87680, 0xc002eff200) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000e0a3c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000e0a3c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000d0ae98, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc002b796c8, 0xc00201b2c0, 0x51d23a0, 0xc00015c940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00201b2c0, 0x0, 0x51d23a0, 0xc00015c940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00201b2c0, 0x51d23a0, 0xc00015c940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003196280, 0xc00201b2c0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003196280, 0x1)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003196280, 0xc0031720a0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001982d0, 0x7fb3e8b70080, 0xc001a11800, 0x495020e, 0x14, 0xc001932f00, 0x3, 0x3, 0x529bcc0, 0xc00015c940, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc001a11800, 0x495020e, 0x14, 0xc000581e80, 0x3, 0x4, 0x4)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc001a11800, 0x495020e, 0x14, 0xc000885bc0, 0x2, 0x2, 0x2)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a11800)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a11800)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc001a11800, 0x4afadb8)
  /usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:960 +0x350
S [SKIPPING] in Spec Setup (BeforeEach) [0.132 seconds]
[k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6] [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:335
  Only supported for providers [gce gke kubemark] (not )
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-pools
Apr 14 16:05:43.017: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Apr 14 16:05:43.026: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-pools-2994
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool]
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:34
Apr 14 16:05:43.132: INFO: Only supported for providers [gke] (not )
[AfterEach] [k8s.io] GKE node pools [Feature:GKENodePool]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:43.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-pools-2994" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.789 seconds]
[k8s.io] GKE node pools [Feature:GKENodePool]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should create a cluster with multiple node pools [Feature:GKENodePool] [BeforeEach]
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:38
  Only supported for providers [gke] (not )
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:35
------------------------------
[BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename localssd
Apr 14 16:05:43.064: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Apr 14 16:05:43.072: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in localssd-3097
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD]
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:36
Apr 14 16:05:43.178: INFO: Only supported for providers [gke] (not )
[AfterEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:43.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "localssd-3097" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.828 seconds]
[k8s.io] GKE local SSD [Feature:GKELocalSSD]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should write and read from node local SSD [Feature:GKELocalSSD] [BeforeEach]
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:40
  Only supported for providers [gke] (not )
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:37
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
Apr 14 16:05:43.664: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Apr 14 16:05:43.672: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-3833
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64
[It] should reject invalid sysctls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:148
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:43.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3833" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":1,"skipped":106,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename autoscaling
Apr 14 16:05:43.765: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Apr 14 16:05:43.776: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-3370
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71
Apr 14 16:05:43.882: INFO: Only supported for providers [gce gke kubemark] (not )
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:43.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "autoscaling-3370" for this suite.
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115
STEP: Restoring initial size of the cluster
E0414 16:05:43.892205 77 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 283 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x4032680, 0x798f6e0)
  /usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc0014127d0, 0xc0004ea000, 0x7f38786956d0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0014128c8, 0xc69e00, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00316a0a0, 0xc0014128c8, 0xc00316a0a0, 0x504ce8)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0014128c8, 0x452108, 0xc0014128b0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7b4c620, 0x7d, 0x4e7777)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2
k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002aa81b0, 0x25, 0xc002d7a420, 0xc000a90000)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9
k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47
k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000ee28a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000ee28a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0006a6d70, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0014136c8, 0xc0020f6e10, 0x51d23a0, 0xc000172940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0020f6e10, 0x0, 0x51d23a0, 0xc000172940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0020f6e10, 0x51d23a0, 0xc000172940)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003173180, 0xc0020f6e10, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003173180, 0x1)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003173180, 0xc001f78418)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001ae2d0, 0x7f38754f8900, 0xc001a81e00, 0x495020e, 0x14, 0xc0018e7350, 0x3, 0x3, 0x529bcc0, 0xc000172940, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc001a81e00, 0x495020e, 0x14, 0xc0003d46c0, 0x3, 0x4, 0x4)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc001a81e00, 0x495020e, 0x14, 0xc0011d7640, 0x2, 0x2, 0x2)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a81e00)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a81e00)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc001a81e00, 0x4afadb8)
  /usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:960 +0x350
S [SKIPPING] in Spec Setup (BeforeEach) [1.417 seconds]
[k8s.io] Cluster size autoscaler scalability [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should scale up at all [Feature:ClusterAutoscalerScalability1] [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:138
  Only supported for providers [gce gke kubemark] (not )
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
Apr 14 16:05:42.916: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Apr 14 16:05:42.924: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-7026
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:183
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:45.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7026" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":1,"skipped":99,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:43.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-6450
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a docker exec liveness probe with timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215
Apr 14 16:05:45.417: INFO: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:45.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6450" for this suite.
S [SKIPPING] [2.282 seconds]
[k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a docker exec liveness probe with timeout [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215
  The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:217
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
Apr 14 16:05:42.295: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Apr 14 16:05:42.303: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-8812
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not run without a specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:05:48.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8812" for this suite.
• [SLOW TEST:6.156 seconds]
[k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99
    should not run without a specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":1,"skipped":75,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
Apr 14 16:05:42.232: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Apr 14 16:05:42.240: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-9721 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 Apr 14 16:05:42.357: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-9721" to be "Succeeded or Failed" Apr 14 16:05:42.359: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.49905ms Apr 14 16:05:44.363: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006146562s Apr 14 16:05:46.368: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01077112s Apr 14 16:05:48.370: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013371793s Apr 14 16:05:50.374: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016907501s Apr 14 16:05:52.377: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.019875672s Apr 14 16:05:52.377: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:52.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9721" for this suite. 
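[Editor's note] The security-context-test-9721 spec waits for the explicit-nonroot-uid pod to reach Succeeded. The shape of that pod is roughly the sketch below: runAsNonRoot is asserted and a concrete non-root UID is supplied, so the kubelet can verify the constraint and start the container. Conversely, asserting runAsNonRoot without a numeric UID on an image that runs as root is what makes the earlier "should not run without a specified user ID" case refuse to start. UID 1234, the image, and the command are illustrative.

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// explicitNonRootUIDPod sketches a runAsNonRoot pod with an explicit non-root
// UID; the kubelet can prove the constraint holds and the container runs to
// completion, matching the "Succeeded" phase seen in the wait loop above.
func explicitNonRootUIDPod() *corev1.Pod {
	runAsNonRoot := true
	uid := int64(1234)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-nonroot-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "explicit-nonroot-uid",
				Image:   "busybox",
				Command: []string{"id", "-u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: &runAsNonRoot,
					RunAsUser:    &uid,
				},
			}},
		},
	}
}
```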
• [SLOW TEST:10.183 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":67,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:52.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-7144 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 14 16:05:52.881: INFO: Only supported for providers [gce gke kubemark] (not ) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:52.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-7144" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0414 16:05:52.890957 67 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 203 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82 panic(0x4032680, 0x798f6e0) /usr/local/go/src/runtime/panic.go:679 +0x1b2 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc00369a7d0, 0xc000b1ca80, 0x7fea3a9fe008) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00369a8c8, 0xc69e00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003568540, 0xc00369a8c8, 0xc003568540, 0x504ce8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00369a8c8, 0x452108, 0xc00369a8b0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7b4c620, 0x92, 0x4e7777) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002925950, 0x25, 0xc001fcdda0, 0xc0021f5200) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000c467e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000c467e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000ee2c58, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00369b6c8, 0xc001a3b1d0, 0x51d23a0, 0xc000172940) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001a3b1d0, 0x0, 0x51d23a0, 0xc000172940) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001a3b1d0, 0x51d23a0, 0xc000172940) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0016248c0, 0xc001a3b1d0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0016248c0, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0016248c0, 0xc002069f30) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001ae2d0, 0x7fea37a7f8a8, 0xc00329c500, 0x495020e, 0x14, 0xc002752e10, 0x3, 0x3, 0x529bcc0, 0xc000172940, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc00329c500, 0x495020e, 0x14, 0xc0023a8100, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc00329c500, 0x495020e, 0x14, 0xc0028c10c0, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00329c500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324 k8s.io/kubernetes/test/e2e.TestE2E(0xc00329c500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b testing.tRunner(0xc00329c500, 0x4afadb8) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 S [SKIPPING] in Spec Setup (BeforeEach) [0.134 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:297 Only supported for providers [gce gke kubemark] (not ) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:52.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-8579 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 14 16:05:53.068: INFO: Only supported for providers [gce gke kubemark] (not ) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:53.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-8579" for this suite. 
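[Editor's note] The panic above (and the identical one that follows) comes from the "Restoring initial size of the cluster" AfterEach: the provider check in BeforeEach skips the spec before the suite captures a client, so WaitForReadyNodes is later called with a nil clientset (visible as the 0x0 receiver in waitListSchedulableNodes). The sketch below is a hedged illustration of a defensive guard, not the upstream code; the surrounding test wiring and the node count are assumptions.

```go
package sketches

import (
	"time"

	"github.com/onsi/ginkgo"
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/kubernetes/test/e2e/framework"
	e2enode "k8s.io/kubernetes/test/e2e/framework/node"
)

// registerGuardedRestore shows the shape of an AfterEach that bails out when
// the spec was skipped before a client was captured, avoiding the nil-pointer
// dereference logged above. Illustrative only.
func registerGuardedRestore(f *framework.Framework, originalNodeCount int) {
	var c clientset.Interface

	ginkgo.BeforeEach(func() {
		// The real spec skips here unless the provider is gce/gke/kubemark;
		// when it skips, c is never assigned.
		c = f.ClientSet
	})

	ginkgo.AfterEach(func() {
		if c == nil {
			return // spec was skipped before a client was captured; nothing to restore
		}
		framework.ExpectNoError(e2enode.WaitForReadyNodes(c, originalNodeCount, 20*time.Minute))
	})
}
```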
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0414 16:05:53.078346 67 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 203 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82 panic(0x4032680, 0x798f6e0) /usr/local/go/src/runtime/panic.go:679 +0x1b2 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc00369a7d0, 0xc000abe380, 0x7fea3aa17420) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00369a8c8, 0xc69e00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003609ce0, 0xc00369a8c8, 0xc003609ce0, 0x504ce8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00369a8c8, 0x452108, 0xc00369a8b0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7b4c620, 0x8a, 0x4e7777) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0037b6f00, 0x25, 0xc0023e62a0, 0xc00255b800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000c467e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000c467e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000ee2c58, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00369b6c8, 0xc001a3af00, 0x51d23a0, 0xc000172940) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001a3af00, 0x0, 0x51d23a0, 0xc000172940) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001a3af00, 0x51d23a0, 0xc000172940) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0016248c0, 0xc001a3af00, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0016248c0, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0016248c0, 0xc002069f30) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001ae2d0, 0x7fea37a7f8a8, 0xc00329c500, 0x495020e, 0x14, 0xc002752e10, 0x3, 0x3, 0x529bcc0, 0xc000172940, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc00329c500, 0x495020e, 0x14, 0xc0023a8100, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc00329c500, 0x495020e, 0x14, 0xc0028c10c0, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00329c500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324 k8s.io/kubernetes/test/e2e.TestE2E(0xc00329c500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b testing.tRunner(0xc00329c500, 0x4afadb8) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 S [SKIPPING] in Spec Setup (BeforeEach) [0.132 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should scale up twice [Feature:ClusterAutoscalerScalability2] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:161 Only supported for providers [gce gke kubemark] (not ) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:41.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 14 16:05:41.970: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:41.980: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-7677 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 Apr 14 16:05:42.099: INFO: Waiting up to 5m0s for pod "busybox-user-0-94a60e99-6fb5-45e6-8800-828e769cadc2" in namespace "security-context-test-7677" to be "Succeeded or Failed" Apr 14 16:05:42.102: INFO: Pod "busybox-user-0-94a60e99-6fb5-45e6-8800-828e769cadc2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.324176ms Apr 14 16:05:44.105: INFO: Pod "busybox-user-0-94a60e99-6fb5-45e6-8800-828e769cadc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006714731s Apr 14 16:05:46.109: INFO: Pod "busybox-user-0-94a60e99-6fb5-45e6-8800-828e769cadc2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.00986749s Apr 14 16:05:48.111: INFO: Pod "busybox-user-0-94a60e99-6fb5-45e6-8800-828e769cadc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012652334s Apr 14 16:05:50.114: INFO: Pod "busybox-user-0-94a60e99-6fb5-45e6-8800-828e769cadc2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015291833s Apr 14 16:05:52.117: INFO: Pod "busybox-user-0-94a60e99-6fb5-45e6-8800-828e769cadc2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018212718s Apr 14 16:05:54.120: INFO: Pod "busybox-user-0-94a60e99-6fb5-45e6-8800-828e769cadc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.020828139s Apr 14 16:05:54.120: INFO: Pod "busybox-user-0-94a60e99-6fb5-45e6-8800-828e769cadc2" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:54.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7677" for this suite. • [SLOW TEST:12.179 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":5,"failed":0} Apr 14 16:05:54.130: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 14 16:05:42.297: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:42.305: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-9880 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 Apr 14 16:05:42.425: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-f6bf0d89-02e2-45da-bf69-d2e3d8308def" in namespace "security-context-test-9880" to be "Succeeded or Failed" Apr 14 16:05:42.427: INFO: Pod "alpine-nnp-nil-f6bf0d89-02e2-45da-bf69-d2e3d8308def": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370391ms Apr 14 16:05:44.430: INFO: Pod "alpine-nnp-nil-f6bf0d89-02e2-45da-bf69-d2e3d8308def": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005114526s Apr 14 16:05:46.433: INFO: Pod "alpine-nnp-nil-f6bf0d89-02e2-45da-bf69-d2e3d8308def": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.007875501s Apr 14 16:05:48.436: INFO: Pod "alpine-nnp-nil-f6bf0d89-02e2-45da-bf69-d2e3d8308def": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010688004s Apr 14 16:05:50.438: INFO: Pod "alpine-nnp-nil-f6bf0d89-02e2-45da-bf69-d2e3d8308def": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013179131s Apr 14 16:05:52.440: INFO: Pod "alpine-nnp-nil-f6bf0d89-02e2-45da-bf69-d2e3d8308def": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015505464s Apr 14 16:05:54.443: INFO: Pod "alpine-nnp-nil-f6bf0d89-02e2-45da-bf69-d2e3d8308def": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.018451315s Apr 14 16:05:54.443: INFO: Pod "alpine-nnp-nil-f6bf0d89-02e2-45da-bf69-d2e3d8308def" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:54.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9880" for this suite. • [SLOW TEST:12.185 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl Apr 14 16:05:42.263: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:42.271: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-9789 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:54.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9789" for this suite. 
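[Editor's note] The alpine-nnp-nil pod above and the alpine-nnp-true pod a little further on both exercise securityContext.allowPrivilegeEscalation, which maps to the process's no_new_privs setting. Leaving the field unset with a non-zero UID leaves escalation allowed (the "not explicitly set and uid != 0" case); setting it explicitly pins the behaviour. A minimal sketch of the knob is below; the image, command, and UID are illustrative, and the real specs use a dedicated test image.

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// allowPrivilegeEscalationPod builds a pod whose only interesting field is
// allowPrivilegeEscalation. Pass nil to leave it unset, or a pointer to pin
// it to true/false, mirroring the alpine-nnp-* variants in the log.
func allowPrivilegeEscalationPod(name string, escalation *bool) *corev1.Pod {
	uid := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    name,
				Image:   "alpine",
				Command: []string{"/bin/sh", "-c", "grep NoNewPrivs /proc/self/status"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:                &uid,
					AllowPrivilegeEscalation: escalation,
				},
			}},
		},
	}
}
```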
• [SLOW TEST:12.220 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":68,"failed":0} Apr 14 16:05:54.465: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":1,"skipped":62,"failed":0} Apr 14 16:05:54.465: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Apr 14 16:05:42.464: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:42.472: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-7777 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:387 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:54.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7777" for this suite. • [SLOW TEST:12.311 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:265 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:387 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":1,"skipped":91,"failed":0} Apr 14 16:05:54.652: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 14 16:05:42.340: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. 
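[Editor's note] The container-runtime-7777 spec above first creates an image pull secret and then starts a container from a private registry; the companion specs later in the log show the failure paths (invalid registry, and private registry without a secret). A sketch of the two objects involved is below; the registry host, credentials, and image are placeholders, not values from the log.

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// privateRegistryObjects builds a kubernetes.io/dockerconfigjson secret and a
// pod that references it via imagePullSecrets, which is the mechanism the
// "pull from private registry with secret" spec exercises.
func privateRegistryObjects() (*corev1.Secret, *corev1.Pod) {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "registry-pull-secret"},
		Type:       corev1.SecretTypeDockerConfigJson,
		Data: map[string][]byte{
			corev1.DockerConfigJsonKey: []byte(`{"auths":{"registry.example.com":{"auth":"BASE64-USER-PASS"}}}`),
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "private-image-pod"},
		Spec: corev1.PodSpec{
			ImagePullSecrets: []corev1.LocalObjectReference{{Name: secret.Name}},
			Containers: []corev1.Container{{
				Name:  "private-image",
				Image: "registry.example.com/team/app:1.0",
			}},
		},
	}
	return secret, pod
}
```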
Apr 14 16:05:42.351: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-6660 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 Apr 14 16:05:42.469: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-cb0598f5-344d-4c36-9dbe-ee0cb4793e98" in namespace "security-context-test-6660" to be "Succeeded or Failed" Apr 14 16:05:42.471: INFO: Pod "alpine-nnp-true-cb0598f5-344d-4c36-9dbe-ee0cb4793e98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225496ms Apr 14 16:05:44.475: INFO: Pod "alpine-nnp-true-cb0598f5-344d-4c36-9dbe-ee0cb4793e98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005559652s Apr 14 16:05:46.478: INFO: Pod "alpine-nnp-true-cb0598f5-344d-4c36-9dbe-ee0cb4793e98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00914368s Apr 14 16:05:48.481: INFO: Pod "alpine-nnp-true-cb0598f5-344d-4c36-9dbe-ee0cb4793e98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011727101s Apr 14 16:05:50.483: INFO: Pod "alpine-nnp-true-cb0598f5-344d-4c36-9dbe-ee0cb4793e98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014353346s Apr 14 16:05:52.486: INFO: Pod "alpine-nnp-true-cb0598f5-344d-4c36-9dbe-ee0cb4793e98": Phase="Pending", Reason="", readiness=false. Elapsed: 10.016586818s Apr 14 16:05:54.489: INFO: Pod "alpine-nnp-true-cb0598f5-344d-4c36-9dbe-ee0cb4793e98": Phase="Pending", Reason="", readiness=false. Elapsed: 12.01981746s Apr 14 16:05:56.492: INFO: Pod "alpine-nnp-true-cb0598f5-344d-4c36-9dbe-ee0cb4793e98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.0226527s Apr 14 16:05:56.492: INFO: Pod "alpine-nnp-true-cb0598f5-344d-4c36-9dbe-ee0cb4793e98" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:56.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6660" for this suite. 
• [SLOW TEST:14.190 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":91,"failed":0} Apr 14 16:05:56.508: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:41.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod Apr 14 16:05:41.975: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:41.985: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-privileged-pod-2830 STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 STEP: Creating a pod with a privileged container STEP: Executing in the privileged container Apr 14 16:05:56.114: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-2830 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 14 16:05:56.114: INFO: >>> kubeConfig: /root/.kube/config Apr 14 16:05:56.364: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-2830 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 14 16:05:56.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Apr 14 16:05:56.522: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-2830 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 14 16:05:56.522: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:56.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-2830" for this suite. 
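[Editor's note] The e2e-privileged-pod-2830 spec above (its summary follows below) execs the same "ip link add dummy1 type dummy" command in two containers of one pod and expects it to succeed only in the privileged one, where the runtime grants the full capability set including NET_ADMIN. A sketch of that two-container pod is below; image, commands, and names are illustrative.

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// privilegedPod builds a pod with one privileged and one non-privileged
// container; both idle so that commands can be exec'd into them, as the
// ExecWithOptions lines in the log show.
func privilegedPod() *corev1.Pod {
	privileged := true
	notPrivileged := false
	keepAlive := []string{"/bin/sh", "-c", "sleep 3600"}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:            "privileged-container",
					Image:           "busybox",
					Command:         keepAlive,
					SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
				},
				{
					Name:            "not-privileged-container",
					Image:           "busybox",
					Command:         keepAlive,
					SecurityContext: &corev1.SecurityContext{Privileged: &notPrivileged},
				},
			},
		},
	}
}
```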
• [SLOW TEST:14.718 seconds] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 ------------------------------ {"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":1,"skipped":7,"failed":0} Apr 14 16:05:56.675: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Apr 14 16:05:43.814: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:43.823: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-1348 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:371 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:56.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1348" for this suite. • [SLOW TEST:14.491 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:265 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:371 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":1,"skipped":173,"failed":0} Apr 14 16:05:57.006: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples Apr 14 16:05:43.115: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:43.123: INFO: Found ClusterRoles; assuming RBAC is enabled. 
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in examples-6753 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 [It] should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:136 STEP: creating the pod Apr 14 16:05:43.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=examples-6753' Apr 14 16:05:43.697: INFO: stderr: "" Apr 14 16:05:43.697: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Apr 14 16:05:57.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs dapi-test-pod test-container --namespace=examples-6753' Apr 14 16:05:57.855: INFO: stderr: "" Apr 14 16:05:57.855: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-6753\nMY_POD_IP=10.244.3.37\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Apr 14 16:05:57.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs dapi-test-pod test-container --namespace=examples-6753' Apr 14 16:05:58.002: INFO: stderr: "" Apr 14 16:05:58.002: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-6753\nMY_POD_IP=10.244.3.37\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:58.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-6753" for this suite. 
• [SLOW TEST:15.642 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [k8s.io] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:136 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace","total":-1,"completed":1,"skipped":99,"failed":0} Apr 14 16:05:58.011: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Apr 14 16:05:44.515: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:44.524: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-6336 STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:170 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 14 16:05:59.689: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:05:59.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6336" for this suite. 
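[Editor's note] The dapi-test-pod environment dumped twice in the examples-6753 spec (MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP, MY_HOST_IP) comes from downward API fieldRef env vars. A sketch of how those variables are wired is below; the image and command are assumptions, only the variable names and field paths follow from the log output.

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod builds a pod whose MY_POD_* environment variables are
// resolved from fields of the running pod via the downward API, matching the
// values printed by the kubectl logs calls above.
func downwardAPIPod() *corev1.Pod {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dapi-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("MY_POD_NAME", "metadata.name"),
					fieldEnv("MY_POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("MY_POD_IP", "status.podIP"),
					fieldEnv("MY_HOST_IP", "status.hostIP"),
				},
			}},
		},
	}
}
```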
• [SLOW TEST:16.979 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:170 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":1,"skipped":309,"failed":0} Apr 14 16:05:59.706: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:44.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-8583 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:06:01.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8583" for this suite. • [SLOW TEST:17.829 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:265 should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":1,"skipped":253,"failed":0} Apr 14 16:06:01.856: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 14 16:05:44.415: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:44.424: INFO: Found ClusterRoles; assuming RBAC is enabled. 
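[Editor's note] The container-runtime-6336 spec compares the terminated container's termination message against "DONE". The message is whatever the container wrote to its terminationMessagePath (by default /dev/termination-log) before exiting; a sketch of a pod producing it is below, with the image assumed.

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminationMessagePod writes DONE to the termination message file and
// exits; the kubelet surfaces that text in the container's terminated state,
// which is what the "Expected: &{DONE} to match ..." log line checks.
func terminationMessagePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                   "termination-message-container",
				Image:                  "busybox",
				Command:                []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-log"},
				TerminationMessagePath: "/dev/termination-log",
			}},
		},
	}
}
```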
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-2774 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 Apr 14 16:05:44.542: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece" in namespace "security-context-test-2774" to be "Succeeded or Failed" Apr 14 16:05:44.545: INFO: Pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece": Phase="Pending", Reason="", readiness=false. Elapsed: 2.591617ms Apr 14 16:05:46.550: INFO: Pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007347997s Apr 14 16:05:48.557: INFO: Pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014631208s Apr 14 16:05:50.560: INFO: Pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01752796s Apr 14 16:05:52.563: INFO: Pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021092739s Apr 14 16:05:54.567: INFO: Pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024371101s Apr 14 16:05:56.570: INFO: Pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02796712s Apr 14 16:05:58.573: INFO: Pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031105746s Apr 14 16:06:00.576: INFO: Pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece": Phase="Pending", Reason="", readiness=false. Elapsed: 16.033817171s Apr 14 16:06:02.579: INFO: Pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece": Phase="Failed", Reason="", readiness=false. Elapsed: 18.037113869s Apr 14 16:06:02.579: INFO: Pod "busybox-readonly-true-b1f2821c-dc66-4b96-b172-e5ec5ac65ece" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:06:02.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2774" for this suite. 
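[Editor's note] In the security-context-test-2774 spec above, the busybox-readonly-true pod ends in Phase="Failed" and the wait still passes, because the framework only requires the pod to reach "Succeeded or Failed"; the failure itself is consistent with the spec's expectation that a write to a read-only root filesystem does not go through. A sketch of such a pod is below; the write command is an assumption.

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readOnlyRootfsPod mounts the container's root filesystem read-only, so an
// attempted write fails and the pod ends up Failed, matching the phases seen
// in the wait loop above.
func readOnlyRootfsPod() *corev1.Pod {
	readOnly := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-true"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-readonly-true",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "touch /should-fail"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
}
```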
• [SLOW TEST:19.951 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":256,"failed":0} Apr 14 16:06:02.591: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:43.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-2265 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:06:03.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2265" for this suite. 
• [SLOW TEST:19.816 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":2,"skipped":156,"failed":0} Apr 14 16:06:03.691: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:45.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-4573 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 Apr 14 16:05:45.878: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-4573" to be "Succeeded or Failed" Apr 14 16:05:45.880: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293286ms Apr 14 16:05:47.884: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005672569s Apr 14 16:05:49.888: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009438306s Apr 14 16:05:51.892: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013502037s Apr 14 16:05:53.895: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016343622s Apr 14 16:05:55.898: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019951591s Apr 14 16:05:57.901: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022743179s Apr 14 16:05:59.903: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 14.025258242s Apr 14 16:06:01.906: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 16.028131932s Apr 14 16:06:03.911: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.033091681s Apr 14 16:06:03.911: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:06:03.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4573" for this suite. 
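[Editor's note] The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` blocks throughout this log, including the implicit-nonroot-uid wait just above, come from the framework polling the pod phase every couple of seconds until it is terminal or the timeout expires. The sketch below is a simplified stand-in for that loop using client-go and wait.PollImmediate, not the framework's actual helper.

```go
package sketches

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitSucceededOrFailed polls the pod phase every two seconds for up to five
// minutes and returns once the pod is Succeeded or Failed, printing progress
// lines similar to the ones in the log above.
func waitSucceededOrFailed(c kubernetes.Interface, namespace, name string) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		fmt.Printf("Pod %q: Phase=%q\n", name, phase)
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	return phase, err
}
```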
• [SLOW TEST:18.806 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":2,"skipped":134,"failed":0} Apr 14 16:06:03.925: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:43.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples Apr 14 16:05:45.265: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:45.274: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in examples-7940 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 [It] should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:115 STEP: creating secret and pod Apr 14 16:05:45.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=examples-7940' Apr 14 16:05:45.714: INFO: stderr: "" Apr 14 16:05:45.714: INFO: stdout: "secret/test-secret created\n" Apr 14 16:05:45.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=examples-7940' Apr 14 16:05:45.951: INFO: stderr: "" Apr 14 16:05:45.951: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Apr 14 16:06:03.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs secret-test-pod test-container --namespace=examples-7940' Apr 14 16:06:04.122: INFO: stderr: "" Apr 14 16:06:04.122: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:06:04.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-7940" for this suite. 
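The secret example above is the standard secret-volume pattern: a Secret named test-secret with a data-1 key, mounted into secret-test-pod at /etc/secret-volume, whose container prints the file so the test can check the kubectl logs output shown above. A compact Go sketch of the two objects follows; the names, mount path and key/value mirror what the log reports, while the image and command are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Secret and pod names, the mount path and the data-1 key/value mirror the
	// output logged above; the image and command are illustrative assumptions.
	secret := &v1.Secret{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Secret"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-pod"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name:         "secret-volume",
				VolumeSource: v1.VolumeSource{Secret: &v1.SecretVolumeSource{SecretName: "test-secret"}},
			}},
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	for _, obj := range []interface{}{secret, pod} {
		out, err := json.MarshalIndent(obj, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
}

Creating the Secret first and the pod second, as the two kubectl create invocations above do, is what lets the kubelet project the key into the volume before the container starts.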
• [SLOW TEST:21.100 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [k8s.io] Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:115 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret","total":-1,"completed":1,"skipped":493,"failed":0} Apr 14 16:06:04.132: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 14 16:05:44.615: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:44.623: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-4168 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:06:04.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4168" for this suite. 
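The security-context case above asserts the negative side of runAsNonRoot: with runAsNonRoot: true and an explicit runAsUser: 0, the kubelet refuses to start the container, so the pod never reaches Running or Succeeded, which is exactly what the test waits to confirm. A sketch of that contradictory spec is below; the pod name, image and command are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	// runAsNonRoot and runAsUser: 0 contradict each other, so the kubelet rejects
	// the container at start time and the pod stays unstarted, matching the
	// behaviour asserted above. Name, image and command are illustrative.
	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-root-uid-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "explicit-root-uid",
				Image:   "busybox:1.29",
				Command: []string{"id", "-u"},
				SecurityContext: &v1.SecurityContext{
					RunAsNonRoot: boolPtr(true),
					RunAsUser:    int64Ptr(0),
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}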
• [SLOW TEST:21.936 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":360,"failed":0} Apr 14 16:06:04.754: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:48.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-4134 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:376 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:06:07.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4134" for this suite. 
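The container-runtime case above only needs a container whose image can be pulled, started and torn down again. To reproduce the pull step by hand, a throwaway pod that forces a fresh pull via imagePullPolicy: Always is enough; the sketch below is an assumption about shape, not the test's own fixture.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Throwaway pod that always contacts the registry for its image, runs a
	// no-op command and exits; name, image and command are illustrative.
	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:            "pull-check",
				Image:           "busybox:1.29",
				ImagePullPolicy: v1.PullAlways, // pull even if the image is already cached on the node
				Command:         []string{"true"},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}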
• [SLOW TEST:19.212 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:265 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:376 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":2,"skipped":257,"failed":0} Apr 14 16:06:07.926: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:45.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-6404 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 Apr 14 16:05:45.980: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e" in namespace "security-context-test-6404" to be "Succeeded or Failed" Apr 14 16:05:45.983: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.500666ms Apr 14 16:05:47.986: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005363796s Apr 14 16:05:49.989: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00914147s Apr 14 16:05:51.994: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0141402s Apr 14 16:05:53.998: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017618317s Apr 14 16:05:56.000: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020273586s Apr 14 16:05:58.003: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022547974s Apr 14 16:06:00.006: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.025948518s Apr 14 16:06:02.009: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.028902186s Apr 14 16:06:04.012: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.031490457s Apr 14 16:06:06.017: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.036437921s Apr 14 16:06:08.020: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.040042928s Apr 14 16:06:08.020: INFO: Pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e" satisfied condition "Succeeded or Failed" Apr 14 16:06:08.026: INFO: Got logs for pod "busybox-privileged-true-6408c95c-11e2-4b63-909f-2d1d0b6de13e": "" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:06:08.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6404" for this suite. • [SLOW TEST:22.478 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":169,"failed":0} Apr 14 16:06:08.036: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods Apr 14 16:05:43.966: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:43.975: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-3516 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:790 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:06:10.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3516" for this suite. 
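The readiness-gate test above shows the mechanism end to end: the pod declares custom condition types under spec.readinessGates, and it only reports Ready while every one of those conditions is True in status.conditions, which the test toggles by patching the pod status. A sketch of the spec side follows, reusing the k8s.io/test-condition1 and k8s.io/test-condition2 types from the log; the pod name, image and lack of a command are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod declaring two custom readiness gates; the pod is reported Ready only
	// while status.conditions carries both types with status "True".
	// Condition types match the log; name and image are illustrative.
	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-gate-demo"},
		Spec: v1.PodSpec{
			ReadinessGates: []v1.PodReadinessGate{
				{ConditionType: "k8s.io/test-condition1"},
				{ConditionType: "k8s.io/test-condition2"},
			},
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2", // present in the node image list later in this log
			}},
		},
	}
	// Whatever owns a gate (here, the e2e test) sets the matching condition on the
	// pod's status; flipping one condition back to False drops the pod out of Ready
	// again, which is the patch sequence visible in the log above.
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}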
• [SLOW TEST:27.503 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:790 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":1,"skipped":238,"failed":0} Apr 14 16:06:10.141: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-7846 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 STEP: Creating pod liveness-639a4fb4-d4e6-4b0e-bfa8-441aea79f949 in namespace container-probe-7846 Apr 14 16:06:02.685: INFO: Started pod liveness-639a4fb4-d4e6-4b0e-bfa8-441aea79f949 in namespace container-probe-7846 STEP: checking the pod's current state and verifying that restartCount is present Apr 14 16:06:02.689: INFO: Initial restart count of pod liveness-639a4fb4-d4e6-4b0e-bfa8-441aea79f949 is 0 Apr 14 16:06:20.718: INFO: Restart count of pod container-probe-7846/liveness-639a4fb4-d4e6-4b0e-bfa8-441aea79f949 is now 1 (18.029786651s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:06:20.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7846" for this suite. • [SLOW TEST:37.933 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":2,"skipped":260,"failed":0} Apr 14 16:06:20.736: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples Apr 14 16:05:43.215: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:43.224: INFO: Found ClusterRoles; assuming RBAC is enabled. 
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in examples-6931 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 [It] liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 Apr 14 16:05:43.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=examples-6931' Apr 14 16:05:43.707: INFO: stderr: "" Apr 14 16:05:43.707: INFO: stdout: "pod/liveness-exec created\n" Apr 14 16:05:43.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=examples-6931' Apr 14 16:05:43.949: INFO: stderr: "" Apr 14 16:05:43.949: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Apr 14 16:05:55.958: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:05:57.961: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:05:59.957: INFO: Pod: liveness-http, restart count:0 Apr 14 16:05:59.963: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:01.959: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:01.966: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:03.963: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:03.969: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:05.967: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:05.972: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:07.969: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:07.975: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:09.974: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:09.978: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:11.979: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:11.981: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:13.983: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:13.984: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:15.987: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:15.989: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:17.993: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:17.993: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:19.996: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:19.997: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:22.000: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:22.000: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:24.005: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:24.005: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:26.008: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:26.010: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:28.013: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:28.013: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:30.017: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:30.018: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:32.020: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:32.021: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:34.023: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:34.023: INFO: Pod: liveness-http, restart count:0 Apr 14 16:06:36.026: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:36.026: INFO: Pod: liveness-http, restart count:1 Apr 14 16:06:36.027: INFO: Saw liveness-http restart, succeeded... 
Apr 14 16:06:38.029: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:40.033: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:42.037: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:44.039: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:46.043: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:48.047: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:50.052: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:52.056: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:54.058: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:56.061: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:06:58.067: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:07:00.071: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:07:02.075: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:07:04.080: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:07:06.084: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:07:08.087: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:07:10.091: INFO: Pod: liveness-exec, restart count:0 Apr 14 16:07:12.094: INFO: Pod: liveness-exec, restart count:1 Apr 14 16:07:12.094: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:07:12.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-6931" for this suite. • [SLOW TEST:89.722 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [k8s.io] Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted","total":-1,"completed":1,"skipped":104,"failed":0} Apr 14 16:07:12.104: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:43.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-2387 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 STEP: Creating pod liveness-40f511ad-102a-40e7-affd-e48616484f93 in namespace container-probe-2387 Apr 14 16:06:03.490: INFO: Started pod liveness-40f511ad-102a-40e7-affd-e48616484f93 in namespace container-probe-2387 STEP: checking the pod's current state and verifying that restartCount is present Apr 14 16:06:03.493: INFO: Initial restart count of pod liveness-40f511ad-102a-40e7-affd-e48616484f93 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:10:03.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2387" for this suite. • [SLOW TEST:260.722 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":1,"skipped":116,"failed":0} Apr 14 16:10:03.930: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods Apr 14 16:05:43.865: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 14 16:05:43.873: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8071 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:694 STEP: getting restart delay-0 Apr 14 16:07:10.047: INFO: getRestartDelay: restartCount = 3, finishedAt=2021-04-14 16:06:37 +0000 UTC restartedAt=2021-04-14 16:07:09 +0000 UTC (32s) STEP: getting restart delay-1 Apr 14 16:08:11.283: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-04-14 16:07:14 +0000 UTC restartedAt=2021-04-14 16:08:10 +0000 UTC (56s) STEP: getting restart delay-2 Apr 14 16:09:49.644: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-04-14 16:08:15 +0000 UTC restartedAt=2021-04-14 16:09:47 +0000 UTC (1m32s) STEP: updating the image Apr 14 16:09:50.152: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Apr 14 16:10:17.227: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-04-14 16:10:00 +0000 UTC restartedAt=2021-04-14 16:10:15 +0000 UTC (15s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:10:17.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8071" for this suite. 
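The back-off test above is a useful illustration of the kubelet's crash-loop handling: each restart of a failing container roughly doubles the delay (32s, 56s, 1m32s in the log), and changing the container image resets that timer, which is why the delay measured right after the image update drops to 15s. A pod that reproduces the crash loop looks roughly like the sketch below; the pod name matches the log, while the image and the failing command are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod whose container keeps exiting, so the kubelet restarts it with an
	// exponentially growing back-off delay. Pod name matches the log above;
	// the image and the crash command are illustrative assumptions.
	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-back-off-image"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyAlways,
			Containers: []v1.Container{{
				Name:    "back-off",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "sleep 5; exit 1"}, // fail shortly after each start
			}},
		},
	}
	// Updating spec.containers[0].image on the running pod (for example to
	// busybox:1.28, also present on the node) is the step that resets the
	// back-off timer in the test logged above.
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}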
• [SLOW TEST:274.602 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:694 ------------------------------ {"msg":"PASSED [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":1,"skipped":248,"failed":0} Apr 14 16:10:17.238: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:05:42.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-lease-test-8501 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 STEP: wait until node is ready Apr 14 16:05:45.272: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Apr 14 16:05:46.283: INFO: node status heartbeat is unchanged for 1.004291601s, waiting for 1m20s Apr 14 16:05:47.282: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:05:47.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: 
s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:05:48.284: INFO: node status heartbeat is unchanged for 1.002375447s, waiting for 1m20s Apr 14 16:05:49.282: INFO: node status heartbeat is unchanged for 2.000368302s, waiting for 1m20s Apr 14 16:05:50.284: INFO: node status heartbeat is unchanged for 3.001542812s, waiting for 1m20s Apr 14 16:05:51.285: INFO: node status heartbeat is unchanged for 4.002587589s, waiting for 1m20s Apr 14 16:05:52.284: INFO: node status heartbeat is unchanged for 5.002430618s, waiting for 1m20s Apr 14 16:05:53.283: INFO: node status heartbeat is unchanged for 6.001468602s, waiting for 1m20s Apr 14 16:05:54.282: INFO: node status heartbeat is unchanged for 7.000167008s, waiting for 1m20s Apr 14 16:05:55.283: INFO: node status heartbeat is unchanged for 8.000762434s, waiting for 1m20s Apr 14 16:05:56.283: INFO: node status heartbeat is unchanged for 9.00101015s, waiting for 1m20s Apr 14 16:05:57.283: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:05:57.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: 
"405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, NodeInfo: v1.NodeSystemInfo{MachineID: "3bf72f90d0d14cb2bb79e60bcb52e158", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "2213d318-375c-47a6-9ac1-36d4d507d552", KernelVersion: "3.10.0-1160.24.1.el7.x86_64", OSImage: "CentOS Linux 7 (Core)", ContainerRuntimeVersion: "docker://19.3.12", KubeletVersion: "v1.18.8", KubeProxyVersion: "v1.18.8", OperatingSystem: "linux", Architecture: "amd64"}, Images: []v1.ContainerImage{ ... 
// 22 identical elements {Names: []string{"prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654"}, SizeBytes: 17463681}, {Names: []string{"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", "nginx:1.14-alpine"}, SizeBytes: 16032814}, + { + Names: []string{ + "busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", + "busybox:1.29", + }, + SizeBytes: 1154361, + }, {Names: []string{"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", "busybox:1.28"}, SizeBytes: 1146369}, {Names: []string{"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f", "k8s.gcr.io/pause:3.2"}, SizeBytes: 682696}, {Names: []string{"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa", "k8s.gcr.io/pause:3.3"}, SizeBytes: 682696}, }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } Apr 14 16:05:58.282: INFO: node status heartbeat is unchanged for 999.647113ms, waiting for 1m20s Apr 14 16:05:59.288: INFO: node status heartbeat is unchanged for 2.004958672s, waiting for 1m20s Apr 14 16:06:00.284: INFO: node status heartbeat is unchanged for 3.001121241s, waiting for 1m20s Apr 14 16:06:01.285: INFO: node status heartbeat is unchanged for 4.002095455s, waiting for 1m20s Apr 14 16:06:02.285: INFO: node status heartbeat is unchanged for 5.002252341s, waiting for 1m20s Apr 14 16:06:03.284: INFO: node status heartbeat is unchanged for 6.001218044s, waiting for 1m20s Apr 14 16:06:04.284: INFO: node status heartbeat is unchanged for 7.000911966s, waiting for 1m20s Apr 14 16:06:05.283: INFO: node status heartbeat is unchanged for 8.000101815s, waiting for 1m20s Apr 14 16:06:06.282: INFO: node status heartbeat is unchanged for 8.999240517s, waiting for 1m20s Apr 14 16:06:07.282: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Apr 14 16:06:07.284: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", 
LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:07 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:07 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:05:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:07 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:06:08.282: INFO: node status heartbeat is unchanged for 1.000298946s, waiting for 1m20s Apr 14 16:06:09.282: INFO: node status heartbeat is unchanged for 1.999926039s, waiting for 1m20s Apr 14 16:06:10.283: INFO: node status heartbeat is unchanged for 3.001049775s, waiting for 1m20s Apr 14 16:06:11.282: INFO: node status heartbeat is unchanged for 4.000273897s, waiting for 1m20s Apr 14 16:06:12.282: INFO: node status heartbeat is unchanged for 4.999963539s, waiting for 1m20s Apr 14 16:06:13.285: INFO: node status heartbeat is unchanged for 6.003243281s, waiting for 1m20s Apr 14 16:06:14.282: INFO: node status heartbeat is unchanged for 7.000229265s, waiting for 1m20s Apr 14 16:06:15.283: INFO: node status heartbeat is unchanged for 8.00072129s, waiting for 1m20s Apr 14 16:06:16.283: INFO: node status heartbeat is unchanged for 9.000693516s, waiting for 1m20s Apr 14 16:06:17.282: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:06:17.284: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: 
"77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:17 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:17 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:17 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 14 16:06:18.282: INFO: node status heartbeat is unchanged for 1.000727906s, waiting for 1m20s Apr 14 16:06:19.283: INFO: node status heartbeat is unchanged for 2.000911314s, waiting for 1m20s Apr 14 16:06:20.283: INFO: node status heartbeat is unchanged for 3.00116274s, waiting for 1m20s Apr 14 16:06:21.282: INFO: node status heartbeat is unchanged for 4.000771927s, waiting for 1m20s Apr 14 16:06:22.282: INFO: node status heartbeat is unchanged for 5.000145531s, waiting for 1m20s Apr 14 16:06:23.285: INFO: node status heartbeat is unchanged for 6.003210913s, waiting for 1m20s Apr 14 16:06:24.282: INFO: node status heartbeat is unchanged for 7.000745482s, waiting for 1m20s Apr 14 16:06:25.282: INFO: node status heartbeat is unchanged for 8.00057445s, waiting for 1m20s Apr 14 16:06:26.283: INFO: node status heartbeat is unchanged for 9.001460199s, waiting for 1m20s Apr 14 16:06:27.282: INFO: node status heartbeat is unchanged for 10.000751315s, waiting for 1m20s Apr 14 16:06:28.283: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:06:28.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet 
has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, NodeInfo: v1.NodeSystemInfo{MachineID: "3bf72f90d0d14cb2bb79e60bcb52e158", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "2213d318-375c-47a6-9ac1-36d4d507d552", KernelVersion: "3.10.0-1160.24.1.el7.x86_64", OSImage: "CentOS Linux 7 (Core)", ContainerRuntimeVersion: "docker://19.3.12", KubeletVersion: "v1.18.8", KubeProxyVersion: "v1.18.8", OperatingSystem: "linux", Architecture: "amd64"}, Images: []v1.ContainerImage{ ... // 14 identical elements {Names: []string{"lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a", "lachlanevenson/k8s-helm:v3.2.3"}, SizeBytes: 46479395}, {Names: []string{"localhost:30500/sriov-device-plugin@sha256:0ed4596bcd9f2a115db336e97ceb241880198edb0b31804244065230d589c0c0", "localhost:30500/sriov-device-plugin:v3.3.1"}, SizeBytes: 44391453}, + { + Names: []string{ + "gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213", + "gcr.io/kubernetes-e2e-test-images/nonroot:1.0", + }, + SizeBytes: 42321438, + }, {Names: []string{"quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd", "quay.io/coreos/kube-rbac-proxy:v0.4.1"}, SizeBytes: 41317870}, {Names: []string{"quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7", "quay.io/coreos/prometheus-operator:v0.40.0"}, SizeBytes: 38238457}, ... // 4 identical elements {Names: []string{"prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654"}, SizeBytes: 17463681}, {Names: []string{"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", "nginx:1.14-alpine"}, SizeBytes: 16032814}, + { + Names: []string{ + "gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411", + "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0", + }, + SizeBytes: 6757579, + }, + { + Names: []string{ + "gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0", + "gcr.io/authenticated-image-pulling/alpine:3.7", + }, + SizeBytes: 4206620, + }, + { + Names: []string{ + "gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2", + "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", + }, + SizeBytes: 1563521, + }, {Names: []string{"busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", "busybox:1.29"}, SizeBytes: 1154361}, {Names: []string{"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", "busybox:1.28"}, SizeBytes: 1146369}, ... 
// 2 identical elements }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } Apr 14 16:06:29.282: INFO: node status heartbeat is unchanged for 999.800371ms, waiting for 1m20s Apr 14 16:06:30.283: INFO: node status heartbeat is unchanged for 2.000014372s, waiting for 1m20s Apr 14 16:06:31.285: INFO: node status heartbeat is unchanged for 3.002264549s, waiting for 1m20s Apr 14 16:06:32.283: INFO: node status heartbeat is unchanged for 3.999990989s, waiting for 1m20s Apr 14 16:06:33.286: INFO: node status heartbeat is unchanged for 5.002988053s, waiting for 1m20s Apr 14 16:06:34.285: INFO: node status heartbeat is unchanged for 6.002078699s, waiting for 1m20s Apr 14 16:06:35.283: INFO: node status heartbeat is unchanged for 7.000680912s, waiting for 1m20s Apr 14 16:06:36.285: INFO: node status heartbeat is unchanged for 8.001972378s, waiting for 1m20s Apr 14 16:06:37.285: INFO: node status heartbeat is unchanged for 9.002318721s, waiting for 1m20s Apr 14 16:06:38.284: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:06:38.287: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:37 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:37 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: 
"PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:37 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:06:39.282: INFO: node status heartbeat is unchanged for 997.410706ms, waiting for 1m20s Apr 14 16:06:40.283: INFO: node status heartbeat is unchanged for 1.998691342s, waiting for 1m20s Apr 14 16:06:41.282: INFO: node status heartbeat is unchanged for 2.998329766s, waiting for 1m20s Apr 14 16:06:42.283: INFO: node status heartbeat is unchanged for 3.999402565s, waiting for 1m20s Apr 14 16:06:43.283: INFO: node status heartbeat is unchanged for 4.998423391s, waiting for 1m20s Apr 14 16:06:44.283: INFO: node status heartbeat is unchanged for 5.99868522s, waiting for 1m20s Apr 14 16:06:45.283: INFO: node status heartbeat is unchanged for 6.998585321s, waiting for 1m20s Apr 14 16:06:46.283: INFO: node status heartbeat is unchanged for 7.998733973s, waiting for 1m20s Apr 14 16:06:47.282: INFO: node status heartbeat is unchanged for 8.99766962s, waiting for 1m20s Apr 14 16:06:48.282: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:06:48.284: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: 
"False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:47 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:47 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:47 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:06:49.283: INFO: node status heartbeat is unchanged for 1.001114132s, waiting for 1m20s Apr 14 16:06:50.282: INFO: node status heartbeat is unchanged for 2.000171621s, waiting for 1m20s Apr 14 16:06:51.283: INFO: node status heartbeat is unchanged for 3.000722828s, waiting for 1m20s Apr 14 16:06:52.282: INFO: node status heartbeat is unchanged for 4.000520106s, waiting for 1m20s Apr 14 16:06:53.282: INFO: node status heartbeat is unchanged for 4.999521952s, waiting for 1m20s Apr 14 16:06:54.282: INFO: node status heartbeat is unchanged for 5.999752912s, waiting for 1m20s Apr 14 16:06:55.283: INFO: node status heartbeat is unchanged for 7.000697247s, waiting for 1m20s Apr 14 16:06:56.282: INFO: node status heartbeat is unchanged for 8.000198672s, waiting for 1m20s Apr 14 16:06:57.284: INFO: node status heartbeat is unchanged for 9.001594836s, waiting for 1m20s Apr 14 16:06:58.285: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:06:58.287: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 
21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:57 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:57 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:57 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 14 16:06:59.285: INFO: node status heartbeat is unchanged for 999.774526ms, waiting for 1m20s Apr 14 16:07:00.283: INFO: node status heartbeat is unchanged for 1.997726059s, waiting for 1m20s Apr 14 16:07:01.283: INFO: node status heartbeat is unchanged for 2.997550181s, waiting for 1m20s Apr 14 16:07:02.282: INFO: node status heartbeat is unchanged for 3.996597877s, waiting for 1m20s Apr 14 16:07:03.283: INFO: node status heartbeat is unchanged for 4.997875931s, waiting for 1m20s Apr 14 16:07:04.282: INFO: node status heartbeat is unchanged for 5.996668711s, waiting for 1m20s Apr 14 16:07:05.282: INFO: node status heartbeat is unchanged for 6.997009782s, waiting for 1m20s Apr 14 16:07:06.284: INFO: node status heartbeat is unchanged for 7.998814157s, waiting for 1m20s Apr 14 16:07:07.282: INFO: node status heartbeat is unchanged for 8.997267681s, waiting for 1m20s Apr 14 16:07:08.285: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:07:08.287: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:07 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:06:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:07 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: 
v1.Time{Time: s"2021-04-14 16:06:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:07 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:07:09.282: INFO: node status heartbeat is unchanged for 997.463871ms, waiting for 1m20s Apr 14 16:07:10.288: INFO: node status heartbeat is unchanged for 2.003219926s, waiting for 1m20s Apr 14 16:07:11.282: INFO: node status heartbeat is unchanged for 2.997532991s, waiting for 1m20s Apr 14 16:07:12.283: INFO: node status heartbeat is unchanged for 3.998493754s, waiting for 1m20s Apr 14 16:07:13.283: INFO: node status heartbeat is unchanged for 4.998055828s, waiting for 1m20s Apr 14 16:07:14.283: INFO: node status heartbeat is unchanged for 5.998687328s, waiting for 1m20s Apr 14 16:07:15.284: INFO: node status heartbeat is unchanged for 6.998885673s, waiting for 1m20s Apr 14 16:07:16.285: INFO: node status heartbeat is unchanged for 8.000184583s, waiting for 1m20s Apr 14 16:07:17.282: INFO: node status heartbeat is unchanged for 8.997624993s, waiting for 1m20s Apr 14 16:07:18.282: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:07:18.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-14 16:07:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:17 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:17 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:17 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:07:19.285: INFO: node status heartbeat is unchanged for 1.00228032s, waiting for 1m20s Apr 14 16:07:20.283: INFO: node status heartbeat is unchanged for 2.000490561s, waiting for 1m20s Apr 14 16:07:21.283: INFO: node status heartbeat is unchanged for 3.000944805s, waiting for 1m20s Apr 14 16:07:22.282: INFO: node status heartbeat is unchanged for 3.99959197s, waiting for 1m20s Apr 14 16:07:23.283: INFO: node status heartbeat is unchanged for 5.000178547s, waiting for 1m20s Apr 14 16:07:24.282: INFO: node status heartbeat is unchanged for 6.000057537s, waiting for 1m20s Apr 14 16:07:25.283: INFO: node status heartbeat is unchanged for 7.000526506s, waiting for 1m20s Apr 14 16:07:26.282: INFO: node status heartbeat is unchanged for 7.999867682s, waiting for 1m20s Apr 14 16:07:27.282: INFO: node status heartbeat is unchanged for 8.999327151s, waiting for 1m20s Apr 14 16:07:28.282: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:07:28.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, 
"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 14 16:07:29.282: INFO: node status heartbeat is unchanged for 999.700426ms, waiting for 1m20s Apr 14 16:07:30.284: INFO: node status heartbeat is unchanged for 2.001534756s, waiting for 1m20s Apr 14 16:07:31.283: INFO: node status heartbeat is unchanged for 3.000503791s, waiting for 1m20s Apr 14 16:07:32.285: INFO: node status heartbeat is unchanged for 4.002883839s, waiting for 1m20s Apr 14 16:07:33.284: INFO: node status heartbeat is unchanged for 5.002298688s, waiting for 1m20s Apr 14 16:07:34.283: INFO: node status heartbeat is unchanged for 6.000865468s, waiting for 1m20s Apr 14 16:07:35.283: INFO: node status heartbeat is unchanged for 7.001103944s, waiting for 1m20s Apr 14 16:07:36.284: INFO: node status heartbeat is unchanged for 8.001586497s, waiting for 1m20s Apr 14 16:07:37.283: INFO: node status heartbeat is unchanged for 9.00084557s, waiting for 1m20s Apr 14 16:07:38.283: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:07:38.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:37 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:37 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: 
v1.Time{Time: s"2021-04-14 16:07:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:37 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:07:39.282: INFO: node status heartbeat is unchanged for 999.570563ms, waiting for 1m20s Apr 14 16:07:40.282: INFO: node status heartbeat is unchanged for 1.999839386s, waiting for 1m20s Apr 14 16:07:41.283: INFO: node status heartbeat is unchanged for 3.000251003s, waiting for 1m20s Apr 14 16:07:42.282: INFO: node status heartbeat is unchanged for 3.999316672s, waiting for 1m20s Apr 14 16:07:43.284: INFO: node status heartbeat is unchanged for 5.000975863s, waiting for 1m20s Apr 14 16:07:44.282: INFO: node status heartbeat is unchanged for 5.999482854s, waiting for 1m20s Apr 14 16:07:45.281: INFO: node status heartbeat is unchanged for 6.998855857s, waiting for 1m20s Apr 14 16:07:46.284: INFO: node status heartbeat is unchanged for 8.001841477s, waiting for 1m20s Apr 14 16:07:47.283: INFO: node status heartbeat is unchanged for 9.000086007s, waiting for 1m20s Apr 14 16:07:48.286: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Apr 14 16:07:48.288: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-14 16:07:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:48 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:48 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:48 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:07:49.285: INFO: node status heartbeat is unchanged for 998.79489ms, waiting for 1m20s Apr 14 16:07:50.282: INFO: node status heartbeat is unchanged for 1.996226521s, waiting for 1m20s Apr 14 16:07:51.285: INFO: node status heartbeat is unchanged for 2.999482305s, waiting for 1m20s Apr 14 16:07:52.285: INFO: node status heartbeat is unchanged for 3.999601672s, waiting for 1m20s Apr 14 16:07:53.283: INFO: node status heartbeat is unchanged for 4.996801679s, waiting for 1m20s Apr 14 16:07:54.284: INFO: node status heartbeat is unchanged for 5.997769195s, waiting for 1m20s Apr 14 16:07:55.282: INFO: node status heartbeat is unchanged for 6.99670703s, waiting for 1m20s Apr 14 16:07:56.282: INFO: node status heartbeat is unchanged for 7.996410876s, waiting for 1m20s Apr 14 16:07:57.282: INFO: node status heartbeat is unchanged for 8.996688322s, waiting for 1m20s Apr 14 16:07:58.282: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:07:58.284: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, 
"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:58 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:58 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:58 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 14 16:07:59.282: INFO: node status heartbeat is unchanged for 1.000361943s, waiting for 1m20s Apr 14 16:08:00.283: INFO: node status heartbeat is unchanged for 2.000860272s, waiting for 1m20s Apr 14 16:08:01.282: INFO: node status heartbeat is unchanged for 3.000549952s, waiting for 1m20s Apr 14 16:08:02.282: INFO: node status heartbeat is unchanged for 4.000573342s, waiting for 1m20s Apr 14 16:08:03.283: INFO: node status heartbeat is unchanged for 5.000832186s, waiting for 1m20s Apr 14 16:08:04.282: INFO: node status heartbeat is unchanged for 5.999870031s, waiting for 1m20s Apr 14 16:08:05.283: INFO: node status heartbeat is unchanged for 7.001000177s, waiting for 1m20s Apr 14 16:08:06.283: INFO: node status heartbeat is unchanged for 8.000711193s, waiting for 1m20s Apr 14 16:08:07.283: INFO: node status heartbeat is unchanged for 9.001328319s, waiting for 1m20s Apr 14 16:08:08.282: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:08:08.284: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:08 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:07:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:08 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: 
v1.Time{Time: s"2021-04-14 16:07:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:08 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:08:09.285: INFO: node status heartbeat is unchanged for 1.002843437s, waiting for 1m20s Apr 14 16:08:10.282: INFO: node status heartbeat is unchanged for 2.000024724s, waiting for 1m20s Apr 14 16:08:11.284: INFO: node status heartbeat is unchanged for 3.001929952s, waiting for 1m20s Apr 14 16:08:12.283: INFO: node status heartbeat is unchanged for 4.001542589s, waiting for 1m20s Apr 14 16:08:13.284: INFO: node status heartbeat is unchanged for 5.001722022s, waiting for 1m20s Apr 14 16:08:14.285: INFO: node status heartbeat is unchanged for 6.002698214s, waiting for 1m20s Apr 14 16:08:15.283: INFO: node status heartbeat is unchanged for 7.000666201s, waiting for 1m20s Apr 14 16:08:16.284: INFO: node status heartbeat is unchanged for 8.002070008s, waiting for 1m20s Apr 14 16:08:17.285: INFO: node status heartbeat is unchanged for 9.002969001s, waiting for 1m20s Apr 14 16:08:18.283: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:08:18.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-14 16:08:08 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:08 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:08 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:08:19.284: INFO: node status heartbeat is unchanged for 1.001075766s, waiting for 1m20s Apr 14 16:08:20.282: INFO: node status heartbeat is unchanged for 1.999247207s, waiting for 1m20s Apr 14 16:08:21.283: INFO: node status heartbeat is unchanged for 2.999802059s, waiting for 1m20s Apr 14 16:08:22.283: INFO: node status heartbeat is unchanged for 3.999799729s, waiting for 1m20s Apr 14 16:08:23.282: INFO: node status heartbeat is unchanged for 4.998902005s, waiting for 1m20s Apr 14 16:08:24.282: INFO: node status heartbeat is unchanged for 5.999171548s, waiting for 1m20s Apr 14 16:08:25.283: INFO: node status heartbeat is unchanged for 6.999844593s, waiting for 1m20s Apr 14 16:08:26.283: INFO: node status heartbeat is unchanged for 8.000182663s, waiting for 1m20s Apr 14 16:08:27.283: INFO: node status heartbeat is unchanged for 8.999668284s, waiting for 1m20s Apr 14 16:08:28.282: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:08:28.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, 
"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 14 16:08:29.283: INFO: node status heartbeat is unchanged for 1.000601102s, waiting for 1m20s Apr 14 16:08:30.283: INFO: node status heartbeat is unchanged for 2.000365207s, waiting for 1m20s Apr 14 16:08:31.282: INFO: node status heartbeat is unchanged for 2.999855204s, waiting for 1m20s Apr 14 16:08:32.282: INFO: node status heartbeat is unchanged for 4.00029373s, waiting for 1m20s Apr 14 16:08:33.283: INFO: node status heartbeat is unchanged for 5.000421814s, waiting for 1m20s Apr 14 16:08:34.285: INFO: node status heartbeat is unchanged for 6.002503875s, waiting for 1m20s Apr 14 16:08:35.282: INFO: node status heartbeat is unchanged for 7.000190824s, waiting for 1m20s Apr 14 16:08:36.283: INFO: node status heartbeat is unchanged for 8.000871761s, waiting for 1m20s Apr 14 16:08:37.284: INFO: node status heartbeat is unchanged for 9.001799988s, waiting for 1m20s Apr 14 16:08:38.284: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:08:38.286: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:28 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:38 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:28 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:38 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: 
v1.Time{Time: s"2021-04-14 16:08:28 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:38 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:08:39.282: INFO: node status heartbeat is unchanged for 998.560978ms, waiting for 1m20s Apr 14 16:08:40.283: INFO: node status heartbeat is unchanged for 1.999226909s, waiting for 1m20s Apr 14 16:08:41.285: INFO: node status heartbeat is unchanged for 3.001348613s, waiting for 1m20s Apr 14 16:08:42.283: INFO: node status heartbeat is unchanged for 3.999302197s, waiting for 1m20s Apr 14 16:08:43.283: INFO: node status heartbeat is unchanged for 4.999038381s, waiting for 1m20s Apr 14 16:08:44.284: INFO: node status heartbeat is unchanged for 6.000319468s, waiting for 1m20s Apr 14 16:08:45.282: INFO: node status heartbeat is unchanged for 6.998476592s, waiting for 1m20s Apr 14 16:08:46.285: INFO: node status heartbeat is unchanged for 8.000847929s, waiting for 1m20s Apr 14 16:08:47.285: INFO: node status heartbeat is unchanged for 9.001721517s, waiting for 1m20s Apr 14 16:08:48.282: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:08:48.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-14 16:08:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:48 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:48 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:48 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:08:49.283: INFO: node status heartbeat is unchanged for 1.000837887s, waiting for 1m20s Apr 14 16:08:50.283: INFO: node status heartbeat is unchanged for 2.001286798s, waiting for 1m20s Apr 14 16:08:51.285: INFO: node status heartbeat is unchanged for 3.00297125s, waiting for 1m20s Apr 14 16:08:52.283: INFO: node status heartbeat is unchanged for 4.000940185s, waiting for 1m20s Apr 14 16:08:53.285: INFO: node status heartbeat is unchanged for 5.002555281s, waiting for 1m20s Apr 14 16:08:54.283: INFO: node status heartbeat is unchanged for 6.000533095s, waiting for 1m20s Apr 14 16:08:55.283: INFO: node status heartbeat is unchanged for 7.0004225s, waiting for 1m20s Apr 14 16:08:56.284: INFO: node status heartbeat is unchanged for 8.002077224s, waiting for 1m20s Apr 14 16:08:57.285: INFO: node status heartbeat is unchanged for 9.002324868s, waiting for 1m20s Apr 14 16:08:58.284: INFO: node status heartbeat is unchanged for 10.001506098s, waiting for 1m20s Apr 14 16:08:59.283: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:08:59.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, 
"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:58 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:58 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:58 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 14 16:09:00.283: INFO: node status heartbeat is unchanged for 1.000409016s, waiting for 1m20s Apr 14 16:09:01.282: INFO: node status heartbeat is unchanged for 1.999062358s, waiting for 1m20s Apr 14 16:09:02.283: INFO: node status heartbeat is unchanged for 2.999737344s, waiting for 1m20s Apr 14 16:09:03.282: INFO: node status heartbeat is unchanged for 3.99955403s, waiting for 1m20s Apr 14 16:09:04.286: INFO: node status heartbeat is unchanged for 5.003218212s, waiting for 1m20s Apr 14 16:09:05.284: INFO: node status heartbeat is unchanged for 6.00091915s, waiting for 1m20s Apr 14 16:09:06.284: INFO: node status heartbeat is unchanged for 7.000821396s, waiting for 1m20s Apr 14 16:09:07.285: INFO: node status heartbeat is unchanged for 8.00187468s, waiting for 1m20s Apr 14 16:09:08.285: INFO: node status heartbeat is unchanged for 9.001825732s, waiting for 1m20s Apr 14 16:09:09.283: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:09:09.286: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:08 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:08:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:08 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-14 16:08:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:08 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:09:10.282: INFO: node status heartbeat is unchanged for 999.508796ms, waiting for 1m20s Apr 14 16:09:11.283: INFO: node status heartbeat is unchanged for 1.999846265s, waiting for 1m20s Apr 14 16:09:12.284: INFO: node status heartbeat is unchanged for 3.001284064s, waiting for 1m20s Apr 14 16:09:13.285: INFO: node status heartbeat is unchanged for 4.002700834s, waiting for 1m20s Apr 14 16:09:14.284: INFO: node status heartbeat is unchanged for 5.00154694s, waiting for 1m20s Apr 14 16:09:15.284: INFO: node status heartbeat is unchanged for 6.000812234s, waiting for 1m20s Apr 14 16:09:16.284: INFO: node status heartbeat is unchanged for 7.001732249s, waiting for 1m20s Apr 14 16:09:17.285: INFO: node status heartbeat is unchanged for 8.0022498s, waiting for 1m20s Apr 14 16:09:18.284: INFO: node status heartbeat is unchanged for 9.001557298s, waiting for 1m20s Apr 14 16:09:19.286: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:09:19.288: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:08 +0000 
UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:08 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:08 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:09:20.283: INFO: node status heartbeat is unchanged for 997.135791ms, waiting for 1m20s Apr 14 16:09:21.286: INFO: node status heartbeat is unchanged for 2.000232545s, waiting for 1m20s Apr 14 16:09:22.284: INFO: node status heartbeat is unchanged for 2.998271812s, waiting for 1m20s Apr 14 16:09:23.283: INFO: node status heartbeat is unchanged for 3.99776599s, waiting for 1m20s Apr 14 16:09:24.283: INFO: node status heartbeat is unchanged for 4.997143518s, waiting for 1m20s Apr 14 16:09:25.283: INFO: node status heartbeat is unchanged for 5.997258506s, waiting for 1m20s Apr 14 16:09:26.282: INFO: node status heartbeat is unchanged for 6.996510282s, waiting for 1m20s Apr 14 16:09:27.282: INFO: node status heartbeat is unchanged for 7.996638702s, waiting for 1m20s Apr 14 16:09:28.282: INFO: node status heartbeat is unchanged for 8.996811451s, waiting for 1m20s Apr 14 16:09:29.282: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:09:29.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, 
"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 14 16:09:30.282: INFO: node status heartbeat is unchanged for 999.633979ms, waiting for 1m20s Apr 14 16:09:31.285: INFO: node status heartbeat is unchanged for 2.002743319s, waiting for 1m20s Apr 14 16:09:32.283: INFO: node status heartbeat is unchanged for 3.001251618s, waiting for 1m20s Apr 14 16:09:33.283: INFO: node status heartbeat is unchanged for 4.000713414s, waiting for 1m20s Apr 14 16:09:34.282: INFO: node status heartbeat is unchanged for 4.999517607s, waiting for 1m20s Apr 14 16:09:35.283: INFO: node status heartbeat is unchanged for 6.000781504s, waiting for 1m20s Apr 14 16:09:36.284: INFO: node status heartbeat is unchanged for 7.001586389s, waiting for 1m20s Apr 14 16:09:37.282: INFO: node status heartbeat is unchanged for 8.00032973s, waiting for 1m20s Apr 14 16:09:38.282: INFO: node status heartbeat is unchanged for 8.999877509s, waiting for 1m20s Apr 14 16:09:39.283: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:09:39.285: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:28 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:38 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:28 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:38 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: 
v1.Time{Time: s"2021-04-14 16:09:28 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:38 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:09:40.283: INFO: node status heartbeat is unchanged for 1.000321117s, waiting for 1m20s Apr 14 16:09:41.282: INFO: node status heartbeat is unchanged for 1.999676112s, waiting for 1m20s Apr 14 16:09:42.285: INFO: node status heartbeat is unchanged for 3.001860876s, waiting for 1m20s Apr 14 16:09:43.285: INFO: node status heartbeat is unchanged for 4.001908239s, waiting for 1m20s Apr 14 16:09:44.286: INFO: node status heartbeat is unchanged for 5.003197207s, waiting for 1m20s Apr 14 16:09:45.283: INFO: node status heartbeat is unchanged for 6.000402179s, waiting for 1m20s Apr 14 16:09:46.284: INFO: node status heartbeat is unchanged for 7.001103645s, waiting for 1m20s Apr 14 16:09:47.285: INFO: node status heartbeat is unchanged for 8.001994701s, waiting for 1m20s Apr 14 16:09:48.285: INFO: node status heartbeat is unchanged for 9.001804906s, waiting for 1m20s Apr 14 16:09:49.285: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:09:49.288: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-14 16:09:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:48 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:48 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:38 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:48 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:09:50.283: INFO: node status heartbeat is unchanged for 997.389221ms, waiting for 1m20s Apr 14 16:09:51.286: INFO: node status heartbeat is unchanged for 2.000603706s, waiting for 1m20s Apr 14 16:09:52.284: INFO: node status heartbeat is unchanged for 2.998770435s, waiting for 1m20s Apr 14 16:09:53.283: INFO: node status heartbeat is unchanged for 3.998219832s, waiting for 1m20s Apr 14 16:09:54.285: INFO: node status heartbeat is unchanged for 4.999639078s, waiting for 1m20s Apr 14 16:09:55.284: INFO: node status heartbeat is unchanged for 5.998459792s, waiting for 1m20s Apr 14 16:09:56.285: INFO: node status heartbeat is unchanged for 6.999769703s, waiting for 1m20s Apr 14 16:09:57.284: INFO: node status heartbeat is unchanged for 7.999273394s, waiting for 1m20s Apr 14 16:09:58.285: INFO: node status heartbeat is unchanged for 8.999629026s, waiting for 1m20s Apr 14 16:09:59.285: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:09:59.287: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, 
"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:58 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:58 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:48 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:58 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 14 16:10:00.283: INFO: node status heartbeat is unchanged for 998.658936ms, waiting for 1m20s Apr 14 16:10:01.284: INFO: node status heartbeat is unchanged for 1.999881055s, waiting for 1m20s Apr 14 16:10:02.285: INFO: node status heartbeat is unchanged for 3.000056613s, waiting for 1m20s Apr 14 16:10:03.284: INFO: node status heartbeat is unchanged for 3.999546919s, waiting for 1m20s Apr 14 16:10:04.286: INFO: node status heartbeat is unchanged for 5.001164065s, waiting for 1m20s Apr 14 16:10:05.283: INFO: node status heartbeat is unchanged for 5.998058873s, waiting for 1m20s Apr 14 16:10:06.285: INFO: node status heartbeat is unchanged for 6.999991299s, waiting for 1m20s Apr 14 16:10:07.284: INFO: node status heartbeat is unchanged for 7.99968626s, waiting for 1m20s Apr 14 16:10:08.285: INFO: node status heartbeat is unchanged for 8.999975517s, waiting for 1m20s Apr 14 16:10:09.285: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:10:09.287: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:08 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:09:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:08 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: 
v1.Time{Time: s"2021-04-14 16:09:58 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:08 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:10:10.283: INFO: node status heartbeat is unchanged for 998.756274ms, waiting for 1m20s Apr 14 16:10:11.285: INFO: node status heartbeat is unchanged for 1.999870035s, waiting for 1m20s Apr 14 16:10:12.284: INFO: node status heartbeat is unchanged for 2.999322531s, waiting for 1m20s Apr 14 16:10:13.284: INFO: node status heartbeat is unchanged for 3.99908265s, waiting for 1m20s Apr 14 16:10:14.284: INFO: node status heartbeat is unchanged for 4.99947593s, waiting for 1m20s Apr 14 16:10:15.284: INFO: node status heartbeat is unchanged for 5.999207544s, waiting for 1m20s Apr 14 16:10:16.285: INFO: node status heartbeat is unchanged for 6.999973063s, waiting for 1m20s Apr 14 16:10:17.283: INFO: node status heartbeat is unchanged for 7.998187251s, waiting for 1m20s Apr 14 16:10:18.284: INFO: node status heartbeat is unchanged for 8.999488367s, waiting for 1m20s Apr 14 16:10:19.285: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:10:19.288: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 
16:10:08 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:08 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:08 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 14 16:10:20.283: INFO: node status heartbeat is unchanged for 997.482951ms, waiting for 1m20s Apr 14 16:10:21.285: INFO: node status heartbeat is unchanged for 1.999829861s, waiting for 1m20s Apr 14 16:10:22.283: INFO: node status heartbeat is unchanged for 2.998095252s, waiting for 1m20s Apr 14 16:10:23.283: INFO: node status heartbeat is unchanged for 3.998194321s, waiting for 1m20s Apr 14 16:10:24.285: INFO: node status heartbeat is unchanged for 4.999831424s, waiting for 1m20s Apr 14 16:10:25.283: INFO: node status heartbeat is unchanged for 5.997573527s, waiting for 1m20s Apr 14 16:10:26.285: INFO: node status heartbeat is unchanged for 6.999440361s, waiting for 1m20s Apr 14 16:10:27.285: INFO: node status heartbeat is unchanged for 8.000167153s, waiting for 1m20s Apr 14 16:10:28.285: INFO: node status heartbeat is unchanged for 9.000143561s, waiting for 1m20s Apr 14 16:10:29.286: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:10:29.288: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, 
"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:18 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 14 16:10:30.283: INFO: node status heartbeat is unchanged for 997.223247ms, waiting for 1m20s Apr 14 16:10:31.285: INFO: node status heartbeat is unchanged for 1.999778332s, waiting for 1m20s Apr 14 16:10:32.284: INFO: node status heartbeat is unchanged for 2.998313284s, waiting for 1m20s Apr 14 16:10:33.286: INFO: node status heartbeat is unchanged for 4.000078738s, waiting for 1m20s Apr 14 16:10:34.284: INFO: node status heartbeat is unchanged for 4.998462292s, waiting for 1m20s Apr 14 16:10:35.283: INFO: node status heartbeat is unchanged for 5.997931663s, waiting for 1m20s Apr 14 16:10:36.285: INFO: node status heartbeat is unchanged for 6.99911644s, waiting for 1m20s Apr 14 16:10:37.285: INFO: node status heartbeat is unchanged for 7.999759077s, waiting for 1m20s Apr 14 16:10:38.283: INFO: node status heartbeat is unchanged for 8.997916548s, waiting for 1m20s Apr 14 16:10:39.284: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 14 16:10:39.287: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:24:52 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:28 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:38 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:28 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:38 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: 
v1.Time{Time: s"2021-04-14 16:10:28 +0000 UTC"},
+ LastHeartbeatTime: v1.Time{Time: s"2021-04-14 16:10:38 +0000 UTC"},
LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:01 +0000 UTC"},
Reason: "KubeletHasSufficientPID",
Message: "kubelet has sufficient PID available",
},
{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-14 15:22:52 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
},
Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
... // 5 identical fields
}
Apr 14 16:10:40.283: INFO: node status heartbeat is unchanged for 998.484006ms, waiting for 1m20s
Apr 14 16:10:41.286: INFO: node status heartbeat is unchanged for 2.001477234s, waiting for 1m20s
Apr 14 16:10:42.284: INFO: node status heartbeat is unchanged for 2.999538882s, waiting for 1m20s
Apr 14 16:10:43.285: INFO: node status heartbeat is unchanged for 4.000992099s, waiting for 1m20s
Apr 14 16:10:44.284: INFO: node status heartbeat is unchanged for 4.999954965s, waiting for 1m20s
Apr 14 16:10:45.283: INFO: node status heartbeat is unchanged for 5.998564274s, waiting for 1m20s
Apr 14 16:10:45.286: INFO: node status heartbeat is unchanged for 6.001268348s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [k8s.io] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:10:45.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-8501" for this suite.

• [SLOW TEST:302.429 seconds]
[k8s.io] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":2,"skipped":351,"failed":0}
Apr 14 16:10:45.306: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:05:42.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8998
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:735
STEP: getting restart delay when capped
Apr 14 16:17:40.222: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-04-14 16:12:29 +0000 UTC restartedAt=2021-04-14 16:17:39 +0000 UTC (5m10s)
Apr 14 16:22:50.499: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-04-14 16:17:44 +0000 UTC restartedAt=2021-04-14 16:22:48 +0000 UTC (5m4s)
Apr 14 16:27:56.747: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-04-14 16:22:53 +0000 UTC restartedAt=2021-04-14 16:27:55 +0000 UTC (5m2s)
STEP: getting restart delay after a capped delay
Apr 14 16:33:10.912: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-04-14 16:28:00 +0000 UTC restartedAt=2021-04-14 16:33:09 +0000 UTC (5m9s)
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:33:10.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8998" for this suite.

• [SLOW TEST:1648.081 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:735
------------------------------
{"msg":"PASSED [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":1,"skipped":300,"failed":0}
Apr 14 16:33:10.923: INFO: Running AfterSuite actions on all nodes
Apr 14 16:05:53.135: INFO: Running AfterSuite actions on all nodes
Apr 14 16:33:10.967: INFO: Running AfterSuite actions on node 1
Apr 14 16:33:10.967: INFO: Skipping dumping logs from cluster


Ran 30 of 4994 Specs in 1649.372 seconds
SUCCESS! -- 30 Passed | 0 Failed | 0 Pending | 4964 Skipped


Ginkgo ran 1 suite in 27m30.871033767s
Test Suite Passed
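
Editor's note on the NodeLease output above: the spec is repeatedly diffing the node's status, and the only fields that change in each 10-second diff are the LastHeartbeatTime values on the node's conditions. For readers who want to reproduce that observation outside the e2e framework, the following is a minimal client-go sketch; it is not part of the suite, and the kubeconfig path and node name ("node2") are taken from this particular run as assumptions.

// heartbeatcheck.go - a minimal client-go sketch (hypothetical helper, not part
// of the e2e suite above); it assumes a reachable cluster and the kubeconfig
// path used by this run, /root/.kube/config.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// "node2" is the node whose status the NodeLease spec above was polling.
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Print each condition's heartbeat and transition times - the same fields
	// that advance every ~10s in the diffs logged above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-22s %-6s heartbeat=%s transition=%s\n",
			c.Type, c.Status, c.LastHeartbeatTime, c.LastTransitionTime)
	}
}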
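
Editor's note on the MaxContainerBackOff output above: the roughly 5-minute gaps between restarts once restartCount reaches 7 are consistent with the kubelet's documented container restart back-off, which starts at 10s, doubles after each crash, and is capped at 5m. The sketch below models that capping behaviour only; the 10s initial delay and 5m cap are the documented defaults, not values read from this log, and the function is an illustration rather than the kubelet's actual implementation.

package main

import (
	"fmt"
	"time"
)

// backoffDelay models an exponential restart back-off: start at initial,
// double per restart, and never exceed maxDelay. This mirrors the capped
// behaviour the spec above verifies.
func backoffDelay(restartCount int, initial, maxDelay time.Duration) time.Duration {
	d := initial
	for i := 0; i < restartCount; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	// With a 10s initial delay and a 5m cap, the delay saturates at 5m by
	// restartCount 5, which is why restarts 7 through 10 in the log are
	// spaced roughly 5 minutes apart.
	for count := 0; count <= 10; count++ {
		fmt.Printf("restartCount=%2d delay=%v\n",
			count, backoffDelay(count, 10*time.Second, 5*time.Minute))
	}
}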