Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1617823837 - Will randomize all specs
Will run 4994 specs
Running in parallel across 25 nodes
Apr 7 19:30:39.486: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.489: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 7 19:30:39.518: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 7 19:30:39.570: INFO: The status of Pod cmk-init-discover-node1-8b7dz is Succeeded, skipping waiting
Apr 7 19:30:39.570: INFO: The status of Pod cmk-init-discover-node2-5ldpn is Succeeded, skipping waiting
Apr 7 19:30:39.570: INFO: 40 / 44 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 7 19:30:39.570: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 7 19:30:39.570: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 7 19:30:39.586: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 7 19:30:39.586: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 7 19:30:39.586: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 7 19:30:39.586: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 7 19:30:39.586: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 7 19:30:39.586: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 7 19:30:39.586: INFO: e2e test version: v1.18.17
Apr 7 19:30:39.587: INFO: kube-apiserver version: v1.18.8
Apr 7 19:30:39.587: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.594: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.591: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.614: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.599: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.620: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.604: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.626: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.604: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.627: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.609: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.633: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.616: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.636: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.614: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.637: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.610: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.638: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.618: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.640: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.621: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.644: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.631: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.647: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.622: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.649: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.626: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.649: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.627: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 19:30:39.652: INFO: Cluster IP family: ipv4
Apr 7 19:30:39.634: INFO: >>> kubeConfig: 
/root/.kube/config Apr 7 19:30:39.653: INFO: Cluster IP family: ipv4 Apr 7 19:30:39.633: INFO: >>> kubeConfig: /root/.kube/config Apr 7 19:30:39.653: INFO: Cluster IP family: ipv4 Apr 7 19:30:39.636: INFO: >>> kubeConfig: /root/.kube/config Apr 7 19:30:39.658: INFO: Cluster IP family: ipv4 Apr 7 19:30:39.636: INFO: >>> kubeConfig: /root/.kube/config Apr 7 19:30:39.660: INFO: Cluster IP family: ipv4 Apr 7 19:30:39.648: INFO: >>> kubeConfig: /root/.kube/config Apr 7 19:30:39.663: INFO: Cluster IP family: ipv4 Apr 7 19:30:39.661: INFO: >>> kubeConfig: /root/.kube/config Apr 7 19:30:39.684: INFO: Cluster IP family: ipv4 Apr 7 19:30:39.675: INFO: >>> kubeConfig: /root/.kube/config Apr 7 19:30:39.694: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Apr 7 19:30:39.684: INFO: >>> kubeConfig: /root/.kube/config Apr 7 19:30:39.706: INFO: Cluster IP family: ipv4 SS ------------------------------ Apr 7 19:30:39.683: INFO: >>> kubeConfig: /root/.kube/config Apr 7 19:30:39.706: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Apr 7 19:30:39.700: INFO: >>> kubeConfig: /root/.kube/config Apr 7 19:30:39.722: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling Apr 7 19:30:39.761: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:39.769: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-5788 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 7 19:30:39.876: INFO: Only supported for providers [gce gke kubemark] (not ) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:39.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-5788" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0407 19:30:39.885685 42 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 215 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82 panic(0x4032680, 0x798f6e0) /usr/local/go/src/runtime/panic.go:679 +0x1b2 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc0026987d0, 0xc00065ce00, 0x7f04484ba6d0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0026988c8, 0xc69e00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0029263a0, 0xc0026988c8, 0xc0029263a0, 0x504ce8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0026988c8, 0x452108, 0xc0026988b0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7b4c620, 0x82, 0x4e7777) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc00256b590, 0x25, 0xc002d6b920, 0xc0029aec00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000d73440, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000d73440, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000010a98, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0026996c8, 0xc001558000, 0x51d23a0, 0xc0001d48c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001558000, 0x0, 0x51d23a0, 0xc0001d48c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001558000, 0x51d23a0, 0xc0001d48c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc002554000, 0xc001558000, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc002554000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc002554000, 0xc00253e030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001b42d0, 0x7f044576cb38, 0xc000c02a00, 0x495020e, 0x14, 0xc002fbb0b0, 0x3, 0x3, 0x529bcc0, 0xc0001d48c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc000c02a00, 0x495020e, 0x14, 0xc002664e00, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc000c02a00, 0x495020e, 0x14, 0xc0023fd3e0, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c02a00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324 k8s.io/kubernetes/test/e2e.TestE2E(0xc000c02a00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b testing.tRunner(0xc000c02a00, 0x4afadb8) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 S [SKIPPING] in Spec Setup (BeforeEach) [0.144 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should scale up at all [Feature:ClusterAutoscalerScalability1] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:138 Only supported for providers [gce gke kubemark] (not ) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test Apr 7 19:30:39.835: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:39.844: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-lease-test-8724 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:88 [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:39.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-8724" for this suite. 
•SSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":24,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test Apr 7 19:30:41.043: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:41.052: INFO: Found ClusterRoles; assuming RBAC is enabled. 
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-lease-test-2859 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:41.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-2859" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":1,"skipped":148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling Apr 7 19:30:41.445: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:41.454: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-9319 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 7 19:30:41.560: INFO: Only supported for providers [gce gke kubemark] (not ) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:41.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-9319" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0407 19:30:41.572916 53 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 186 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82 panic(0x4032680, 0x798f6e0) /usr/local/go/src/runtime/panic.go:679 +0x1b2 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc0028167d0, 0xc000642e00, 0x7f493ecc3008) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0028168c8, 0xc69e00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0033c6480, 0xc0028168c8, 0xc0033c6480, 0x504ce8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0028168c8, 0x452108, 0xc0028168b0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7b4c620, 0x73, 0x4e7777) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc003391e60, 0x25, 0xc003002960, 0xc0030c1200) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000b209c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000b209c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0006dc0a8, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0028176c8, 0xc0017300f0, 0x51d23a0, 0xc000170940) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0017300f0, 0x0, 0x51d23a0, 0xc000170940) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0017300f0, 0x51d23a0, 0xc000170940) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc002702000, 0xc0017300f0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc002702000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc002702000, 0xc0026ee030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001ac2d0, 0x7f493c057f38, 0xc0030b2500, 0x495020e, 0x14, 0xc002990e10, 0x3, 0x3, 0x529bcc0, 0xc000170940, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc0030b2500, 0x495020e, 0x14, 0xc002923780, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc0030b2500, 0x495020e, 0x14, 0xc00214ab60, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0030b2500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324 k8s.io/kubernetes/test/e2e.TestE2E(0xc0030b2500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b testing.tRunner(0xc0030b2500, 0x4afadb8) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 S [SKIPPING] in Spec Setup (BeforeEach) [1.393 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should scale up twice [Feature:ClusterAutoscalerScalability2] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:161 Only supported for providers [gce gke kubemark] (not ) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe Apr 7 19:30:41.494: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:41.505: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1216 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a docker exec liveness probe with timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215 Apr 7 19:30:41.613: INFO: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:41.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1216" for this suite. 
S [SKIPPING] [1.442 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a docker exec liveness probe with timeout [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215 The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:217 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling Apr 7 19:30:41.643: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:41.652: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-8799 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 7 19:30:41.759: INFO: Only supported for providers [gce gke kubemark] (not ) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:41.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-8799" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0407 19:30:41.769342 52 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 173 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82 panic(0x4032680, 0x798f6e0) /usr/local/go/src/runtime/panic.go:679 +0x1b2 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc0026e27d0, 0xc0004d2000, 0x7f9791742008) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0026e28c8, 0xc69e00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0031d1020, 0xc0026e28c8, 0xc0031d1020, 0x504ce8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0026e28c8, 0x452108, 0xc0026e28b0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7b4c620, 0x84, 0x4e7777) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0033c6330, 0x25, 0xc002fc69c0, 0xc0030d0600) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000bc54a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000bc54a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00056c300, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0026e36c8, 0xc0016502d0, 0x51d23a0, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0016502d0, 0x0, 0x51d23a0, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0016502d0, 0x51d23a0, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0025f6000, 0xc0016502d0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0025f6000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0025f6000, 0xc0025d4030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001b42d0, 0x7f978f2610b8, 0xc0030bd400, 0x495020e, 0x14, 0xc002ee0930, 0x3, 0x3, 0x529bcc0, 0xc0001d68c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc0030bd400, 0x495020e, 0x14, 0xc002867a40, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc0030bd400, 0x495020e, 0x14, 0xc002862980, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0030bd400) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324 k8s.io/kubernetes/test/e2e.TestE2E(0xc0030bd400) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b testing.tRunner(0xc0030bd400, 0x4afadb8) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 S [SKIPPING] in Spec Setup (BeforeEach) [1.539 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:238 Only supported for providers [gce gke kubemark] (not ) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-pools Apr 7 19:30:41.794: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:41.807: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-pools-6913 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:34 Apr 7 19:30:41.913: INFO: Only supported for providers [gke] (not ) [AfterEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:41.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-pools-6913" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [1.588 seconds] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should create a cluster with multiple node pools [Feature:GKENodePool] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:38 Only supported for providers [gke] (not ) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:41.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename localssd Apr 7 19:30:42.394: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:42.402: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in localssd-3784 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:36 Apr 7 19:30:42.511: INFO: Only supported for providers [gke] (not ) [AfterEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:42.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "localssd-3784" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [1.409 seconds] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should write and read from node local SSD [Feature:GKELocalSSD] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:40 Only supported for providers [gke] (not ) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:37 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl Apr 7 19:30:40.746: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:40.756: INFO: Found ClusterRoles; assuming RBAC is enabled. 
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-8552 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:183 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:42.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-8552" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":1,"skipped":114,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:41.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-9084 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:148 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:42.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9084" for this suite. 
•SS ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":1,"skipped":209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:42.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-5808 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 7 19:30:43.148: INFO: Only supported for providers [gce gke kubemark] (not ) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:43.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-5808" for this suite. [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0407 19:30:43.157113 52 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 173 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82 panic(0x4032680, 0x798f6e0) /usr/local/go/src/runtime/panic.go:679 +0x1b2 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc0036627d0, 0xc0037e6000, 0x7f97917426d0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0036628c8, 0xc69e00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0009e9640, 0xc0036628c8, 0xc0009e9640, 0x504ce8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0036628c8, 0x452108, 0xc0036628b0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 
0x7b4c620, 0x7e, 0x4e7777) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc00280c150, 0x25, 0xc0015063c0, 0xc000d0f200) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000bc54a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000bc54a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00056c300, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0036636c8, 0xc0016501e0, 0x51d23a0, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0016501e0, 0x0, 0x51d23a0, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0016501e0, 0x51d23a0, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0025f6000, 0xc0016501e0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0025f6000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0025f6000, 0xc0025d4030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001b42d0, 0x7f978f2610b8, 0xc0030bd400, 0x495020e, 0x14, 0xc002ee0930, 0x3, 0x3, 0x529bcc0, 0xc0001d68c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc0030bd400, 0x495020e, 0x14, 0xc002867a40, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc0030bd400, 0x495020e, 0x14, 0xc002862980, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0030bd400) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324 k8s.io/kubernetes/test/e2e.TestE2E(0xc0030bd400) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b testing.tRunner(0xc0030bd400, 0x4afadb8) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 S [SKIPPING] in Spec Setup (BeforeEach) [0.966 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should scale down empty nodes [Feature:ClusterAutoscalerScalability3] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:210 Only supported for providers [gce gke kubemark] (not ) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:43.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-1523 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 7 19:30:43.697: INFO: Only supported for providers [gce gke kubemark] (not ) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:43.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-1523" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0407 19:30:43.707594 52 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 173 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82 panic(0x4032680, 0x798f6e0) /usr/local/go/src/runtime/panic.go:679 +0x1b2 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc0036627d0, 0xc000160000, 0x7f9791743b28) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0036628c8, 0xc69e00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000aa4ba0, 0xc0036628c8, 0xc000aa4ba0, 0x504ce8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0036628c8, 0x452108, 0xc0036628b0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7b4c620, 0x83, 0x4e7777) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc00370c630, 0x25, 0xc001be9560, 0xc001ccb200) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000bc54a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000bc54a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00056c300, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0036636c8, 0xc0016503c0, 0x51d23a0, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0016503c0, 0x0, 0x51d23a0, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0016503c0, 0x51d23a0, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0025f6000, 0xc0016503c0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0025f6000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0025f6000, 0xc0025d4030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001b42d0, 0x7f978f2610b8, 0xc0030bd400, 0x495020e, 0x14, 0xc002ee0930, 0x3, 0x3, 0x529bcc0, 0xc0001d68c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc0030bd400, 0x495020e, 0x14, 0xc002867a40, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc0030bd400, 0x495020e, 0x14, 0xc002862980, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0030bd400) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324 k8s.io/kubernetes/test/e2e.TestE2E(0xc0030bd400) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b testing.tRunner(0xc0030bd400, 0x4afadb8) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 S [SKIPPING] in Spec Setup (BeforeEach) [0.515 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:297 Only supported for providers [gce gke kubemark] (not ) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Apr 7 19:30:39.741: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:39.748: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-9723 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:376 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:49.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9723" for this suite. 
• [SLOW TEST:10.198 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:265 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:376 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":5,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Apr 7 19:30:39.840: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:39.849: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-5795 STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:170 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 7 19:30:50.008: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:50.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5795" for this suite. 
• [SLOW TEST:10.201 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:170 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":1,"skipped":28,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:49.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in autoscaling-1152 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 7 19:30:50.069: INFO: Only supported for providers [gce gke kubemark] (not ) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:50.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-1152" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0407 19:30:50.078190 41 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 172 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x4032680, 0x798f6e0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82 panic(0x4032680, 0x798f6e0) /usr/local/go/src/runtime/panic.go:679 +0x1b2 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0xc0016347d0, 0xc000a8e380, 0x7f0173d1f008) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:186 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0016348c8, 0xc69e00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00350fb60, 0xc0016348c8, 0xc00350fb60, 0x504ce8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0016348c8, 0x452108, 0xc0016348b0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7b4c620, 0x84, 0x4e7777) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0xa2 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0007ebbc0, 0x25, 0xc0035585a0, 0xc000bec600) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:157 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:47 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0xf9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000379bc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000379bc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000ffaa20, 0x51d23a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x64 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0016356c8, 0xc0022284b0, 0x51d23a0, 0xc0001d48c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x344 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0022284b0, 0x0, 0x51d23a0, 0xc0001d48c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x2e6 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0022284b0, 0x51d23a0, 0xc0001d48c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc002de0000, 0xc0022284b0, 0x10) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc002de0000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc002de0000, 0xc002dda030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001b42d0, 0x7f0170f3f688, 0xc001fe0000, 0x495020e, 0x14, 0xc00250cfc0, 0x3, 0x3, 0x529bcc0, 0xc0001d48c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6ae0, 0xc001fe0000, 0x495020e, 0x14, 0xc002047a40, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6ae0, 0xc001fe0000, 0x495020e, 0x14, 0xc001c91c40, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001fe0000) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324 k8s.io/kubernetes/test/e2e.TestE2E(0xc001fe0000) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b testing.tRunner(0xc001fe0000, 0x4afadb8) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 S [SKIPPING] in Spec Setup (BeforeEach) [0.132 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:335 Only supported for providers [gce gke kubemark] (not ) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Apr 7 19:30:50.227: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Apr 7 19:30:39.917: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:39.925: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-9681 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:51.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9681" for this suite. 
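------------------------------
Editor's note: the three [SKIPPING] "Cluster size autoscaler scalability" specs above all log the same "invalid memory address or nil pointer dereference" from node/wait.go:186, reached from the AfterEach at cluster_autoscaler_scalability.go:115 through WaitForReadyNodes and CheckReady. The 0x0, 0x0 arguments in the waitListSchedulableNodes/CheckReady frames are consistent with a nil clientset: the provider check in the BeforeEach (cluster_autoscaler_scalability.go:72) skips before the spec-level client is assigned, yet the cleanup still runs and tries to list nodes. A minimal sketch of that pattern, with assumed variable and spec names (not copied from the e2e source):

package autoscaling_sketch

import (
	"context"

	"github.com/onsi/ginkgo"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// c mirrors the spec-level clientset used by the autoscaler scalability specs.
// It is only assigned after the provider check, so it stays nil on this cluster.
var c clientset.Interface

var _ = ginkgo.Describe("Cluster size autoscaler scalability sketch", func() {
	ginkgo.BeforeEach(func() {
		// The skip fires for providers other than gce/gke/kubemark (here: "").
		// Nothing after it runs, including the client assignment.
		ginkgo.Skip("Only supported for providers [gce gke kubemark] (not )")
		// c = f.ClientSet // unreachable on skipped providers
	})

	ginkgo.AfterEach(func() {
		// Ginkgo still runs AfterEach for specs skipped in BeforeEach, as the
		// traces above show. With c == nil, the method call below is the
		// nil pointer dereference reported at node/wait.go:186.
		_, _ = c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	})
})
------------------------------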
• [SLOW TEST:11.195 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:265 should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":1,"skipped":47,"failed":0} Apr 7 19:30:51.095: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Apr 7 19:30:39.963: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:39.971: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-4839 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:371 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:51.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4839" for this suite. • [SLOW TEST:11.192 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:265 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:371 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":1,"skipped":62,"failed":0} Apr 7 19:30:51.138: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 7 19:30:39.947: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. 
Apr 7 19:30:39.958: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-6365 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 Apr 7 19:30:40.076: INFO: Waiting up to 5m0s for pod "busybox-user-0-b1f00780-3415-4b56-a55e-7f2649672ef0" in namespace "security-context-test-6365" to be "Succeeded or Failed" Apr 7 19:30:40.078: INFO: Pod "busybox-user-0-b1f00780-3415-4b56-a55e-7f2649672ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080201ms Apr 7 19:30:42.081: INFO: Pod "busybox-user-0-b1f00780-3415-4b56-a55e-7f2649672ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00456961s Apr 7 19:30:44.083: INFO: Pod "busybox-user-0-b1f00780-3415-4b56-a55e-7f2649672ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007099617s Apr 7 19:30:46.086: INFO: Pod "busybox-user-0-b1f00780-3415-4b56-a55e-7f2649672ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009775573s Apr 7 19:30:48.089: INFO: Pod "busybox-user-0-b1f00780-3415-4b56-a55e-7f2649672ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012641728s Apr 7 19:30:50.091: INFO: Pod "busybox-user-0-b1f00780-3415-4b56-a55e-7f2649672ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015021473s Apr 7 19:30:52.095: INFO: Pod "busybox-user-0-b1f00780-3415-4b56-a55e-7f2649672ef0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.018811063s Apr 7 19:30:52.095: INFO: Pod "busybox-user-0-b1f00780-3415-4b56-a55e-7f2649672ef0" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:52.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6365" for this suite. 
• [SLOW TEST:12.177 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":46,"failed":0} Apr 7 19:30:52.106: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl Apr 7 19:30:39.993: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:40.001: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-5666 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:52.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-5666" for this suite. • [SLOW TEST:12.265 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 7 19:30:39.937: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:39.948: INFO: Found ClusterRoles; assuming RBAC is enabled. 
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-3482 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 Apr 7 19:30:40.066: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-3482" to be "Succeeded or Failed" Apr 7 19:30:40.069: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.460313ms Apr 7 19:30:42.071: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005155665s Apr 7 19:30:44.075: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008359188s Apr 7 19:30:46.077: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011245954s Apr 7 19:30:48.080: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013942192s Apr 7 19:30:50.083: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.016301707s Apr 7 19:30:52.086: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.019922381s Apr 7 19:30:52.086: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:52.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3482" for this suite. • [SLOW TEST:12.318 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":1,"skipped":61,"failed":0} Apr 7 19:30:52.237: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":48,"failed":0} Apr 7 19:30:52.237: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 7 19:30:40.895: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:40.903: INFO: Found ClusterRoles; assuming RBAC is enabled. 
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-5377 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 Apr 7 19:30:41.021: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-2f17682c-c5ff-46cd-a0eb-abda31aa6480" in namespace "security-context-test-5377" to be "Succeeded or Failed" Apr 7 19:30:41.022: INFO: Pod "busybox-privileged-true-2f17682c-c5ff-46cd-a0eb-abda31aa6480": Phase="Pending", Reason="", readiness=false. Elapsed: 1.883336ms Apr 7 19:30:43.025: INFO: Pod "busybox-privileged-true-2f17682c-c5ff-46cd-a0eb-abda31aa6480": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004354979s Apr 7 19:30:45.027: INFO: Pod "busybox-privileged-true-2f17682c-c5ff-46cd-a0eb-abda31aa6480": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006913452s Apr 7 19:30:47.030: INFO: Pod "busybox-privileged-true-2f17682c-c5ff-46cd-a0eb-abda31aa6480": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009600909s Apr 7 19:30:49.033: INFO: Pod "busybox-privileged-true-2f17682c-c5ff-46cd-a0eb-abda31aa6480": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012271903s Apr 7 19:30:51.037: INFO: Pod "busybox-privileged-true-2f17682c-c5ff-46cd-a0eb-abda31aa6480": Phase="Pending", Reason="", readiness=false. Elapsed: 10.016762217s Apr 7 19:30:53.040: INFO: Pod "busybox-privileged-true-2f17682c-c5ff-46cd-a0eb-abda31aa6480": Phase="Pending", Reason="", readiness=false. Elapsed: 12.019294306s Apr 7 19:30:55.045: INFO: Pod "busybox-privileged-true-2f17682c-c5ff-46cd-a0eb-abda31aa6480": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.024906091s Apr 7 19:30:55.046: INFO: Pod "busybox-privileged-true-2f17682c-c5ff-46cd-a0eb-abda31aa6480" satisfied condition "Succeeded or Failed" Apr 7 19:30:55.051: INFO: Got logs for pod "busybox-privileged-true-2f17682c-c5ff-46cd-a0eb-abda31aa6480": "" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:55.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5377" for this suite. 
• [SLOW TEST:14.911 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":149,"failed":0} Apr 7 19:30:55.060: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Apr 7 19:30:40.694: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:40.703: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-7994 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:387 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:55.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7994" for this suite. 
• [SLOW TEST:15.841 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:265 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:387 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":1,"skipped":103,"failed":0} Apr 7 19:30:55.899: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in examples-4995 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 [It] should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:115 STEP: creating secret and pod Apr 7 19:30:41.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=examples-4995' Apr 7 19:30:41.791: INFO: stderr: "" Apr 7 19:30:41.792: INFO: stdout: "secret/test-secret created\n" Apr 7 19:30:41.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=examples-4995' Apr 7 19:30:42.044: INFO: stderr: "" Apr 7 19:30:42.044: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Apr 7 19:30:56.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs secret-test-pod test-container --namespace=examples-4995' Apr 7 19:30:56.246: INFO: stderr: "" Apr 7 19:30:56.246: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:56.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-4995" for this suite. 
• [SLOW TEST:16.080 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [k8s.io] Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:115 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret","total":-1,"completed":2,"skipped":121,"failed":0} Apr 7 19:30:56.255: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 7 19:30:42.193: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:42.202: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-7467 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 Apr 7 19:30:42.321: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-397acae2-9ddb-4b73-b091-f4bd810c510b" in namespace "security-context-test-7467" to be "Succeeded or Failed" Apr 7 19:30:42.324: INFO: Pod "busybox-readonly-true-397acae2-9ddb-4b73-b091-f4bd810c510b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.969205ms Apr 7 19:30:44.330: INFO: Pod "busybox-readonly-true-397acae2-9ddb-4b73-b091-f4bd810c510b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008687523s Apr 7 19:30:46.334: INFO: Pod "busybox-readonly-true-397acae2-9ddb-4b73-b091-f4bd810c510b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012588737s Apr 7 19:30:48.337: INFO: Pod "busybox-readonly-true-397acae2-9ddb-4b73-b091-f4bd810c510b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015731132s Apr 7 19:30:50.340: INFO: Pod "busybox-readonly-true-397acae2-9ddb-4b73-b091-f4bd810c510b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018854018s Apr 7 19:30:52.344: INFO: Pod "busybox-readonly-true-397acae2-9ddb-4b73-b091-f4bd810c510b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022331482s Apr 7 19:30:54.348: INFO: Pod "busybox-readonly-true-397acae2-9ddb-4b73-b091-f4bd810c510b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026867543s Apr 7 19:30:56.353: INFO: Pod "busybox-readonly-true-397acae2-9ddb-4b73-b091-f4bd810c510b": Phase="Failed", Reason="", readiness=false. 
Elapsed: 14.031451834s Apr 7 19:30:56.353: INFO: Pod "busybox-readonly-true-397acae2-9ddb-4b73-b091-f4bd810c510b" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:56.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7467" for this suite. • [SLOW TEST:15.925 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":325,"failed":0} Apr 7 19:30:56.364: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:41.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-9953 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:56.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9953" for this suite. 
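------------------------------
Editor's note: both sysctl specs above (namespaces sysctl-5666 and sysctl-9953) report creating "a pod with the kernel.shm_rmid_forced sysctl", waiting for it to complete, and checking the value from the pod logs. As a rough illustration of the kind of pod involved (object and container names and the busybox command are assumptions, not taken from the test source), a pod requests a sysctl through its pod-level security context:

package sysctl_sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sysctlPod builds a pod that asks the kubelet to set kernel.shm_rmid_forced=1
// for the pod's namespaces and then prints the effective value, which is
// roughly what the sysctl specs above verify in the pod logs.
func sysctlPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				Sysctls: []v1.Sysctl{
					{Name: "kernel.shm_rmid_forced", Value: "1"},
				},
			},
			Containers: []v1.Container{
				{
					Name:    "test-container",
					Image:   "busybox",
					Command: []string{"/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"},
				},
			},
		},
	}
}
------------------------------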
• [SLOW TEST:15.421 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":2,"skipped":201,"failed":0} Apr 7 19:30:56.686: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:44.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in examples-8739 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 [It] should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:136 STEP: creating the pod Apr 7 19:30:44.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=examples-8739' Apr 7 19:30:44.598: INFO: stderr: "" Apr 7 19:30:44.598: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Apr 7 19:30:56.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs dapi-test-pod test-container --namespace=examples-8739' Apr 7 19:30:56.748: INFO: stderr: "" Apr 7 19:30:56.748: INFO: stdout: "KUBERNETES_PORT=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-8739\nMY_POD_IP=10.244.4.40\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Apr 7 19:30:56.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs dapi-test-pod test-container --namespace=examples-8739' Apr 7 19:30:56.880: INFO: stderr: "" Apr 7 19:30:56.880: INFO: stdout: "KUBERNETES_PORT=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-8739\nMY_POD_IP=10.244.4.40\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:56.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-8739" for this suite. 
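------------------------------
Editor's note: the dapi-test-pod logs above show MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP, and MY_HOST_IP alongside the usual service environment variables. Entries like these are populated through Downward API field references; a minimal sketch of how such env vars are wired (the helper and its use here are illustrative, not the example manifest the test applies):

package downwardapi_sketch

import v1 "k8s.io/api/core/v1"

// downwardEnv lists env vars filled from pod metadata and status, matching
// the MY_* values printed by dapi-test-pod above.
func downwardEnv() []v1.EnvVar {
	fieldEnv := func(name, path string) v1.EnvVar {
		return v1.EnvVar{
			Name: name,
			ValueFrom: &v1.EnvVarSource{
				FieldRef: &v1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	return []v1.EnvVar{
		fieldEnv("MY_POD_NAME", "metadata.name"),
		fieldEnv("MY_POD_NAMESPACE", "metadata.namespace"),
		fieldEnv("MY_POD_IP", "status.podIP"),
		fieldEnv("MY_HOST_IP", "status.hostIP"),
	}
}
------------------------------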
• [SLOW TEST:12.777 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [k8s.io] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:136 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace","total":-1,"completed":1,"skipped":737,"failed":0} Apr 7 19:30:56.889: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 7 19:30:42.094: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:42.101: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-555 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 Apr 7 19:30:42.220: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-fdfea7d5-6a80-4421-8229-123069d190ff" in namespace "security-context-test-555" to be "Succeeded or Failed" Apr 7 19:30:42.225: INFO: Pod "alpine-nnp-true-fdfea7d5-6a80-4421-8229-123069d190ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300795ms Apr 7 19:30:44.227: INFO: Pod "alpine-nnp-true-fdfea7d5-6a80-4421-8229-123069d190ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007134336s Apr 7 19:30:46.232: INFO: Pod "alpine-nnp-true-fdfea7d5-6a80-4421-8229-123069d190ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011347056s Apr 7 19:30:48.237: INFO: Pod "alpine-nnp-true-fdfea7d5-6a80-4421-8229-123069d190ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016635028s Apr 7 19:30:50.239: INFO: Pod "alpine-nnp-true-fdfea7d5-6a80-4421-8229-123069d190ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019180434s Apr 7 19:30:52.242: INFO: Pod "alpine-nnp-true-fdfea7d5-6a80-4421-8229-123069d190ff": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021697242s Apr 7 19:30:54.246: INFO: Pod "alpine-nnp-true-fdfea7d5-6a80-4421-8229-123069d190ff": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025278372s Apr 7 19:30:56.249: INFO: Pod "alpine-nnp-true-fdfea7d5-6a80-4421-8229-123069d190ff": Phase="Running", Reason="", readiness=true. Elapsed: 14.028456985s Apr 7 19:30:58.252: INFO: Pod "alpine-nnp-true-fdfea7d5-6a80-4421-8229-123069d190ff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.031919482s Apr 7 19:30:58.252: INFO: Pod "alpine-nnp-true-fdfea7d5-6a80-4421-8229-123069d190ff" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:58.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-555" for this suite. • [SLOW TEST:17.886 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":290,"failed":0} Apr 7 19:30:58.270: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 7 19:30:42.243: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:42.251: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-5539 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 Apr 7 19:30:42.369: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-5539" to be "Succeeded or Failed" Apr 7 19:30:42.371: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 1.931894ms Apr 7 19:30:44.374: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004428802s Apr 7 19:30:46.378: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008186535s Apr 7 19:30:48.382: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012327089s Apr 7 19:30:50.385: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015270904s Apr 7 19:30:52.388: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018731963s Apr 7 19:30:54.392: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022478569s Apr 7 19:30:56.396: INFO: Pod "implicit-nonroot-uid": Phase="Running", Reason="", readiness=true. Elapsed: 14.027032834s Apr 7 19:30:58.400: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.030456333s Apr 7 19:30:58.400: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:58.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5539" for this suite. • [SLOW TEST:17.937 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":1,"skipped":347,"failed":0} Apr 7 19:30:58.416: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:41.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-998 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:59.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-998" for this suite. 
• [SLOW TEST:17.287 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":1,"skipped":273,"failed":0} Apr 7 19:30:59.023: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:43.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-8471 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:30:59.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8471" for this suite. 
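The three runAsNonRoot outcomes in this stretch of the log (an image-specified UID runs, a missing user ID is refused, an explicit UID 0 is refused) all come from the kubelet's pre-start verification of the effective user. The following is a simplified illustration of that decision, not the kubelet's actual source; verifyNonRoot and imageUser are hypothetical names standing in for the resolved image USER:

```go
package main

import (
	"fmt"
	"strconv"

	corev1 "k8s.io/api/core/v1"
)

// verifyNonRoot is an illustrative, simplified version of the check applied
// before starting a container with runAsNonRoot=true. imageUser stands in for
// the USER resolved from the container image.
func verifyNonRoot(sc *corev1.SecurityContext, imageUser string) error {
	if sc == nil || sc.RunAsNonRoot == nil || !*sc.RunAsNonRoot {
		return nil // nothing to enforce
	}
	if sc.RunAsUser != nil {
		if *sc.RunAsUser == 0 {
			// "should not run with an explicit root user ID": refused outright.
			return fmt.Errorf("runAsUser 0 conflicts with runAsNonRoot")
		}
		return nil
	}
	// No explicit UID: fall back to the image. A numeric non-zero UID passes
	// ("should run with an image specified user ID"); anything that cannot be
	// verified as non-root is refused ("should not run without a specified user ID").
	uid, err := strconv.ParseInt(imageUser, 10, 64)
	if err != nil || uid == 0 {
		return fmt.Errorf("image user %q cannot be verified as non-root", imageUser)
	}
	return nil
}

func main() {
	t := true
	var zero int64
	fmt.Println(verifyNonRoot(&corev1.SecurityContext{RunAsNonRoot: &t}, "1234"))                  // <nil>
	fmt.Println(verifyNonRoot(&corev1.SecurityContext{RunAsNonRoot: &t}, ""))                      // error
	fmt.Println(verifyNonRoot(&corev1.SecurityContext{RunAsNonRoot: &t, RunAsUser: &zero}, "1234")) // error
}
```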
• [SLOW TEST:16.535 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":241,"failed":0} Apr 7 19:30:59.621: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:42.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-8636 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 Apr 7 19:30:43.409: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776" in namespace "security-context-test-8636" to be "Succeeded or Failed" Apr 7 19:30:43.412: INFO: Pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776": Phase="Pending", Reason="", readiness=false. Elapsed: 2.769276ms Apr 7 19:30:45.414: INFO: Pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005252304s Apr 7 19:30:47.417: INFO: Pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007880739s Apr 7 19:30:49.422: INFO: Pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013547808s Apr 7 19:30:51.426: INFO: Pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017416286s Apr 7 19:30:53.429: INFO: Pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020497901s Apr 7 19:30:55.433: INFO: Pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024009445s Apr 7 19:30:57.436: INFO: Pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776": Phase="Pending", Reason="", readiness=false. Elapsed: 14.026953588s Apr 7 19:30:59.440: INFO: Pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776": Phase="Pending", Reason="", readiness=false. Elapsed: 16.03098086s Apr 7 19:31:01.442: INFO: Pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.033471982s Apr 7 19:31:01.442: INFO: Pod "alpine-nnp-nil-20604216-2682-452c-b835-8c4808660776" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:31:01.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8636" for this suite. • [SLOW TEST:18.679 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":926,"failed":0} Apr 7 19:31:01.457: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:41.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-privileged-pod-2425 STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 STEP: Creating a pod with a privileged container STEP: Executing in the privileged container Apr 7 19:31:05.070: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-2425 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 19:31:05.070: INFO: >>> kubeConfig: /root/.kube/config Apr 7 19:31:05.207: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-2425 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 19:31:05.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Apr 7 19:31:05.820: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-2425 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 19:31:05.820: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:31:05.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-2425" for this suite. 
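The PrivilegedPod spec above runs the same `ip link add dummy1 type dummy` command in two containers of a single pod and expects it to succeed only where `privileged: true` is set. A sketch of that pod shape, again assuming the k8s.io/api types; the names and image are placeholders, not the exact ones the test uses:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:  "privileged-container",
					Image: "busybox:1.29",
					// Full device and capability access: the netlink command succeeds here.
					SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
				},
				{
					Name:  "not-privileged-container",
					Image: "busybox:1.29",
					// Same command is expected to fail here with "operation not permitted".
					SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(false)},
				},
			},
		},
	}
	for _, c := range pod.Spec.Containers {
		fmt.Printf("%s privileged=%v\n", c.Name, *c.SecurityContext.Privileged)
	}
}
```

The ExecWithOptions lines in the log are the framework shelling into each container in turn to run the command and capture stdout/stderr.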
• [SLOW TEST:23.944 seconds] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 ------------------------------ {"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":1,"skipped":297,"failed":0} Apr 7 19:31:05.926: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe Apr 7 19:30:39.699: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:39.707: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-6500 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 STEP: Creating pod liveness-a6e6a2b7-5a96-46ae-a6be-a9c68f48d5c7 in namespace container-probe-6500 Apr 7 19:30:49.831: INFO: Started pod liveness-a6e6a2b7-5a96-46ae-a6be-a9c68f48d5c7 in namespace container-probe-6500 STEP: checking the pod's current state and verifying that restartCount is present Apr 7 19:30:49.833: INFO: Initial restart count of pod liveness-a6e6a2b7-5a96-46ae-a6be-a9c68f48d5c7 is 0 Apr 7 19:31:09.877: INFO: Restart count of pod container-probe-6500/liveness-a6e6a2b7-5a96-46ae-a6be-a9c68f48d5c7 is now 1 (20.043172662s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:31:09.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6500" for this suite. 
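The probe specs in this part of the log hinge on how the kubelet treats HTTP redirects: a redirect to the same host is followed and the probe is judged on the final response (hence the restart observed above after about 20s), while a redirect to a different host is treated as a probe success, which is what the companion "*not* be restarted with a non-local redirect" spec later in the log relies on. A sketch of a container with an HTTP liveness probe, assuming the v1.18-era corev1.Probe API this suite builds against (newer releases rename the embedded Handler to ProbeHandler); the image, path and port are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Placeholder container showing the shape of the liveness pods above:
	// an HTTP GET probe re-evaluated every periodSeconds, with the container
	// restarted after failureThreshold consecutive failures.
	c := corev1.Container{
		Name:  "liveness",
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // placeholder image/tag
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{ // ProbeHandler in k8s.io/api >= v0.23
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",           // placeholder path
					Port: intstr.FromInt(8080), // placeholder port
				},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       2,
			FailureThreshold:    1,
		},
	}
	fmt.Printf("%s probes %s:%d every %ds\n",
		c.Name,
		c.LivenessProbe.HTTPGet.Path,
		c.LivenessProbe.HTTPGet.Port.IntValue(),
		c.LivenessProbe.PeriodSeconds)
}
```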
• [SLOW TEST:30.211 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":1,"skipped":4,"failed":0} Apr 7 19:31:09.891: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:42.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5252 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:790 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:31:11.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5252" for this suite. • [SLOW TEST:28.659 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:790 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":2,"skipped":222,"failed":0} Apr 7 19:31:11.604: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:40.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples Apr 7 19:30:40.593: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:40.601: INFO: Found ClusterRoles; assuming RBAC is enabled. 
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in examples-136 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 [It] liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 Apr 7 19:30:40.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=examples-136' Apr 7 19:30:41.173: INFO: stderr: "" Apr 7 19:30:41.173: INFO: stdout: "pod/liveness-exec created\n" Apr 7 19:30:41.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=examples-136' Apr 7 19:30:41.430: INFO: stderr: "" Apr 7 19:30:41.430: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Apr 7 19:30:57.439: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:30:57.439: INFO: Pod: liveness-http, restart count:0 Apr 7 19:30:59.442: INFO: Pod: liveness-http, restart count:0 Apr 7 19:30:59.442: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:01.444: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:01.444: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:03.447: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:03.450: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:05.450: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:05.452: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:07.453: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:07.455: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:09.456: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:09.458: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:11.458: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:11.460: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:13.462: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:13.463: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:15.468: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:15.468: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:17.472: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:17.472: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:19.475: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:19.478: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:21.479: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:21.481: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:23.485: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:23.487: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:25.489: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:25.489: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:27.494: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:27.495: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:29.497: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:29.498: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:31.500: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:31.500: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:33.503: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:33.503: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:35.507: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:35.507: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:37.511: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:37.511: INFO: Pod: liveness-http, restart count:0 Apr 7 19:31:39.514: INFO: Pod: liveness-http, restart 
count:0 Apr 7 19:31:39.514: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:41.517: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:41.517: INFO: Pod: liveness-http, restart count:1 Apr 7 19:31:41.517: INFO: Saw liveness-http restart, succeeded... Apr 7 19:31:43.521: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:45.524: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:47.529: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:49.531: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:51.536: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:53.539: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:55.543: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:57.546: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:31:59.549: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:32:01.552: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:32:03.556: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:32:05.560: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:32:07.566: INFO: Pod: liveness-exec, restart count:0 Apr 7 19:32:09.569: INFO: Pod: liveness-exec, restart count:1 Apr 7 19:32:09.569: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:32:09.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-136" for this suite. • [SLOW TEST:89.532 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [k8s.io] Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-4663 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 STEP: Creating pod liveness-090d08ca-48ba-4862-81e7-a44e42a93ca1 in namespace container-probe-4663 Apr 7 19:30:50.030: INFO: Started pod liveness-090d08ca-48ba-4862-81e7-a44e42a93ca1 in namespace container-probe-4663 STEP: checking the pod's current state and verifying that restartCount is present Apr 7 19:30:50.032: INFO: Initial restart count of pod liveness-090d08ca-48ba-4862-81e7-a44e42a93ca1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:34:50.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4663" for this suite. 
• [SLOW TEST:250.594 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":1,"skipped":23,"failed":0} Apr 7 19:34:50.487: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test Apr 7 19:30:39.704: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:39.712: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-lease-test-5118 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 STEP: wait until node is ready Apr 7 19:30:39.829: INFO: Waiting up to 5m0s for node node1 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Apr 7 19:30:40.839: INFO: node status heartbeat is unchanged for 1.003174678s, waiting for 1m20s Apr 7 19:30:41.839: INFO: node status heartbeat is unchanged for 2.00282739s, waiting for 1m20s Apr 7 19:30:42.839: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:30:42.842: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: 
resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:30:43.839: INFO: node status heartbeat is unchanged for 999.645122ms, waiting for 1m20s Apr 7 19:30:44.839: INFO: node status heartbeat is unchanged for 1.999957531s, waiting for 1m20s Apr 7 19:30:45.839: INFO: node status heartbeat is unchanged for 3.000068829s, waiting for 1m20s Apr 7 19:30:46.839: INFO: node status heartbeat is unchanged for 3.999796128s, waiting for 1m20s Apr 7 19:30:47.840: INFO: node status heartbeat is unchanged for 5.000439913s, waiting for 1m20s Apr 7 19:30:48.841: INFO: node status heartbeat is unchanged for 6.001895083s, waiting for 1m20s Apr 7 19:30:49.840: INFO: node status heartbeat is unchanged for 7.000541937s, waiting for 1m20s Apr 7 19:30:50.841: INFO: node status heartbeat is unchanged for 8.001682573s, waiting for 1m20s Apr 7 19:30:51.840: INFO: node status heartbeat is unchanged for 9.000950645s, waiting for 1m20s Apr 7 19:30:52.841: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:30:52.843: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: 
v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 7 19:30:53.841: INFO: node status heartbeat is unchanged for 999.534174ms, waiting for 1m20s Apr 7 19:30:54.840: INFO: node status heartbeat is unchanged for 1.999005435s, waiting for 1m20s Apr 7 19:30:55.841: INFO: node status heartbeat is unchanged for 3.000300755s, waiting for 1m20s Apr 7 19:30:56.839: INFO: node status heartbeat is unchanged for 3.997691636s, waiting for 1m20s Apr 7 19:30:57.839: INFO: node status heartbeat is unchanged for 4.99789647s, waiting for 1m20s Apr 7 19:30:58.840: INFO: node status heartbeat is unchanged for 5.999369678s, waiting for 1m20s Apr 7 19:30:59.841: INFO: node status heartbeat is unchanged for 6.999679968s, waiting for 1m20s Apr 7 19:31:00.842: INFO: node status heartbeat is unchanged for 8.00133315s, waiting for 1m20s Apr 7 19:31:01.840: INFO: node status heartbeat is unchanged for 8.998722618s, waiting for 1m20s Apr 7 19:31:02.841: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:31:02.844: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:30:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-07 19:30:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, NodeInfo: v1.NodeSystemInfo{MachineID: "0f548b230e344aca9c8a99071af2b8c9", SystemUUID: "00CDA902-D022-E711-906E-0017A4403562", BootID: "aa5b9d18-7f9b-4939-a115-2d5fdb202ca5", KernelVersion: "3.10.0-1160.21.1.el7.x86_64", OSImage: "CentOS Linux 7 (Core)", ContainerRuntimeVersion: "docker://19.3.12", KubeletVersion: "v1.18.8", KubeProxyVersion: "v1.18.8", OperatingSystem: "linux", Architecture: "amd64"}, Images: []v1.ContainerImage{ ... // 18 identical elements {Names: []string{"lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a", "lachlanevenson/k8s-helm:v3.2.3"}, SizeBytes: 46479395}, {Names: []string{"localhost:30500/sriov-device-plugin@sha256:0b38c711bcd3a3ce1b402e0ba69d9e43dc34eb73fddd210391b7addb1d358fa9", "nfvpe/sriov-device-plugin:latest", "localhost:30500/sriov-device-plugin:v3.3.1"}, SizeBytes: 44391002}, + { + Names: []string{ + "gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213", + "gcr.io/kubernetes-e2e-test-images/nonroot:1.0", + }, + SizeBytes: 42321438, + }, {Names: []string{"quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee", "quay.io/prometheus/node-exporter:v0.18.1"}, SizeBytes: 22933477}, {Names: []string{"prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654"}, SizeBytes: 17463681}, {Names: []string{"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e", "gcr.io/google-samples/hello-go-gke:1.0"}, SizeBytes: 11443478}, {Names: []string{"quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd", "quay.io/coreos/prometheus-config-reloader:v0.40.0"}, SizeBytes: 10131705}, {Names: []string{"jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2", "jimmidyson/configmap-reload:v0.3.0"}, SizeBytes: 9700438}, + { + Names: []string{ + "gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411", + "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0", + }, + SizeBytes: 6757579, + }, {Names: []string{"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb", "appropriate/curl:edge"}, SizeBytes: 5654234}, {Names: []string{"alpine@sha256:834e9309b5ef0f78d8d20ef0652e7b0272fe97b5baf45720e1b830eaf013cc1b", "alpine:3.12"}, SizeBytes: 5577287}, + { + Names: []string{ + "busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", + "busybox:1.29", + }, + SizeBytes: 1154361, + }, {Names: []string{"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", "busybox:1.28"}, SizeBytes: 1146369}, {Names: 
[]string{"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f", "k8s.gcr.io/pause:3.2"}, SizeBytes: 682696}, {Names: []string{"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa", "k8s.gcr.io/pause:3.3"}, SizeBytes: 682696}, }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } Apr 7 19:31:03.840: INFO: node status heartbeat is unchanged for 998.526804ms, waiting for 1m20s Apr 7 19:31:04.839: INFO: node status heartbeat is unchanged for 1.997933663s, waiting for 1m20s Apr 7 19:31:05.840: INFO: node status heartbeat is unchanged for 2.999189232s, waiting for 1m20s Apr 7 19:31:06.840: INFO: node status heartbeat is unchanged for 3.998809742s, waiting for 1m20s Apr 7 19:31:07.841: INFO: node status heartbeat is unchanged for 4.99974195s, waiting for 1m20s Apr 7 19:31:08.842: INFO: node status heartbeat is unchanged for 6.000637646s, waiting for 1m20s Apr 7 19:31:09.840: INFO: node status heartbeat is unchanged for 6.998803612s, waiting for 1m20s Apr 7 19:31:10.840: INFO: node status heartbeat is unchanged for 7.998811945s, waiting for 1m20s Apr 7 19:31:11.840: INFO: node status heartbeat is unchanged for 8.998385868s, waiting for 1m20s Apr 7 19:31:12.839: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:31:12.842: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-07 19:31:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:31:13.840: INFO: node status heartbeat is unchanged for 1.000936043s, waiting for 1m20s Apr 7 19:31:14.841: INFO: node status heartbeat is unchanged for 2.001858875s, waiting for 1m20s Apr 7 19:31:15.841: INFO: node status heartbeat is unchanged for 3.001310687s, waiting for 1m20s Apr 7 19:31:16.841: INFO: node status heartbeat is unchanged for 4.001587027s, waiting for 1m20s Apr 7 19:31:17.841: INFO: node status heartbeat is unchanged for 5.001430191s, waiting for 1m20s Apr 7 19:31:18.842: INFO: node status heartbeat is unchanged for 6.002284452s, waiting for 1m20s Apr 7 19:31:19.839: INFO: node status heartbeat is unchanged for 6.999578393s, waiting for 1m20s Apr 7 19:31:20.839: INFO: node status heartbeat is unchanged for 7.999654078s, waiting for 1m20s Apr 7 19:31:21.841: INFO: node status heartbeat is unchanged for 9.001333434s, waiting for 1m20s Apr 7 19:31:22.842: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:31:22.844: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: 
"False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:31:23.842: INFO: node status heartbeat is unchanged for 1.000112181s, waiting for 1m20s Apr 7 19:31:24.841: INFO: node status heartbeat is unchanged for 1.999415291s, waiting for 1m20s Apr 7 19:31:25.840: INFO: node status heartbeat is unchanged for 2.998733155s, waiting for 1m20s Apr 7 19:31:26.841: INFO: node status heartbeat is unchanged for 3.999054611s, waiting for 1m20s Apr 7 19:31:27.842: INFO: node status heartbeat is unchanged for 5.000653401s, waiting for 1m20s Apr 7 19:31:28.841: INFO: node status heartbeat is unchanged for 5.999459582s, waiting for 1m20s Apr 7 19:31:29.841: INFO: node status heartbeat is unchanged for 6.999558248s, waiting for 1m20s Apr 7 19:31:30.841: INFO: node status heartbeat is unchanged for 7.999160875s, waiting for 1m20s Apr 7 19:31:31.841: INFO: node status heartbeat is unchanged for 8.999154935s, waiting for 1m20s Apr 7 19:31:32.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:31:32.843: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: 
"77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:32 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 7 19:31:33.839: INFO: node status heartbeat is unchanged for 998.630572ms, waiting for 1m20s Apr 7 19:31:34.840: INFO: node status heartbeat is unchanged for 1.999827844s, waiting for 1m20s Apr 7 19:31:35.839: INFO: node status heartbeat is unchanged for 2.998565324s, waiting for 1m20s Apr 7 19:31:36.839: INFO: node status heartbeat is unchanged for 3.99871503s, waiting for 1m20s Apr 7 19:31:37.840: INFO: node status heartbeat is unchanged for 4.999965118s, waiting for 1m20s Apr 7 19:31:38.841: INFO: node status heartbeat is unchanged for 6.000522392s, waiting for 1m20s Apr 7 19:31:39.840: INFO: node status heartbeat is unchanged for 6.999635624s, waiting for 1m20s Apr 7 19:31:40.840: INFO: node status heartbeat is unchanged for 7.999958s, waiting for 1m20s Apr 7 19:31:41.840: INFO: node status heartbeat is unchanged for 8.999637601s, waiting for 1m20s Apr 7 19:31:42.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:31:42.842: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-07 19:31:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:42 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:31:43.839: INFO: node status heartbeat is unchanged for 999.575687ms, waiting for 1m20s Apr 7 19:31:44.840: INFO: node status heartbeat is unchanged for 2.000219697s, waiting for 1m20s Apr 7 19:31:45.840: INFO: node status heartbeat is unchanged for 3.000214136s, waiting for 1m20s Apr 7 19:31:46.841: INFO: node status heartbeat is unchanged for 4.00135453s, waiting for 1m20s Apr 7 19:31:47.840: INFO: node status heartbeat is unchanged for 5.00029628s, waiting for 1m20s Apr 7 19:31:48.842: INFO: node status heartbeat is unchanged for 6.00196738s, waiting for 1m20s Apr 7 19:31:49.839: INFO: node status heartbeat is unchanged for 6.999846324s, waiting for 1m20s Apr 7 19:31:50.840: INFO: node status heartbeat is unchanged for 8.000234383s, waiting for 1m20s Apr 7 19:31:51.840: INFO: node status heartbeat is unchanged for 9.000298385s, waiting for 1m20s Apr 7 19:31:52.840: INFO: node status heartbeat is unchanged for 10.000543788s, waiting for 1m20s Apr 7 19:31:53.841: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:31:53.844: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: 
"MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:52 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:31:54.841: INFO: node status heartbeat is unchanged for 999.591562ms, waiting for 1m20s Apr 7 19:31:55.841: INFO: node status heartbeat is unchanged for 2.000155656s, waiting for 1m20s Apr 7 19:31:56.839: INFO: node status heartbeat is unchanged for 2.997843924s, waiting for 1m20s Apr 7 19:31:57.839: INFO: node status heartbeat is unchanged for 3.99776007s, waiting for 1m20s Apr 7 19:31:58.841: INFO: node status heartbeat is unchanged for 4.999434388s, waiting for 1m20s Apr 7 19:31:59.840: INFO: node status heartbeat is unchanged for 5.998665794s, waiting for 1m20s Apr 7 19:32:00.840: INFO: node status heartbeat is unchanged for 6.998299573s, waiting for 1m20s Apr 7 19:32:01.840: INFO: node status heartbeat is unchanged for 7.998692616s, waiting for 1m20s Apr 7 19:32:02.839: INFO: node status heartbeat is unchanged for 8.997726627s, waiting for 1m20s Apr 7 19:32:03.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:32:03.842: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: 
resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:31:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 7 19:32:04.839: INFO: node status heartbeat is unchanged for 999.516999ms, waiting for 1m20s Apr 7 19:32:05.840: INFO: node status heartbeat is unchanged for 2.000108107s, waiting for 1m20s Apr 7 19:32:06.840: INFO: node status heartbeat is unchanged for 2.999946008s, waiting for 1m20s Apr 7 19:32:07.842: INFO: node status heartbeat is unchanged for 4.002133646s, waiting for 1m20s Apr 7 19:32:08.840: INFO: node status heartbeat is unchanged for 4.999974541s, waiting for 1m20s Apr 7 19:32:09.840: INFO: node status heartbeat is unchanged for 6.000452158s, waiting for 1m20s Apr 7 19:32:10.840: INFO: node status heartbeat is unchanged for 7.000155302s, waiting for 1m20s Apr 7 19:32:11.839: INFO: node status heartbeat is unchanged for 7.998918003s, waiting for 1m20s Apr 7 19:32:12.839: INFO: node status heartbeat is unchanged for 8.999258397s, waiting for 1m20s Apr 7 19:32:13.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:32:13.842: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-07 19:32:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:12 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:32:14.840: INFO: node status heartbeat is unchanged for 1.000053589s, waiting for 1m20s Apr 7 19:32:15.839: INFO: node status heartbeat is unchanged for 1.999708964s, waiting for 1m20s Apr 7 19:32:16.840: INFO: node status heartbeat is unchanged for 3.000539234s, waiting for 1m20s Apr 7 19:32:17.840: INFO: node status heartbeat is unchanged for 4.000718989s, waiting for 1m20s Apr 7 19:32:18.842: INFO: node status heartbeat is unchanged for 5.002524166s, waiting for 1m20s Apr 7 19:32:19.841: INFO: node status heartbeat is unchanged for 6.001172023s, waiting for 1m20s Apr 7 19:32:20.841: INFO: node status heartbeat is unchanged for 7.000925195s, waiting for 1m20s Apr 7 19:32:21.840: INFO: node status heartbeat is unchanged for 8.000108817s, waiting for 1m20s Apr 7 19:32:22.841: INFO: node status heartbeat is unchanged for 9.0011495s, waiting for 1m20s Apr 7 19:32:23.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:32:23.842: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:12 +0000 UTC"}, + 
LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:22 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:32:24.840: INFO: node status heartbeat is unchanged for 1.000255717s, waiting for 1m20s Apr 7 19:32:25.840: INFO: node status heartbeat is unchanged for 1.99994364s, waiting for 1m20s Apr 7 19:32:26.839: INFO: node status heartbeat is unchanged for 2.999417589s, waiting for 1m20s Apr 7 19:32:27.840: INFO: node status heartbeat is unchanged for 4.000150726s, waiting for 1m20s Apr 7 19:32:28.839: INFO: node status heartbeat is unchanged for 4.999391501s, waiting for 1m20s Apr 7 19:32:29.840: INFO: node status heartbeat is unchanged for 6.000631544s, waiting for 1m20s Apr 7 19:32:30.840: INFO: node status heartbeat is unchanged for 7.000039736s, waiting for 1m20s Apr 7 19:32:31.840: INFO: node status heartbeat is unchanged for 8.00043748s, waiting for 1m20s Apr 7 19:32:32.841: INFO: node status heartbeat is unchanged for 9.00088632s, waiting for 1m20s Apr 7 19:32:33.841: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Apr 7 19:32:33.844: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 7 19:32:34.841: INFO: node status heartbeat is unchanged for 999.459147ms, waiting for 1m20s Apr 7 19:32:35.841: INFO: node status heartbeat is unchanged for 1.999317117s, waiting for 1m20s Apr 7 19:32:36.839: INFO: node status heartbeat is unchanged for 2.997887208s, waiting for 1m20s Apr 7 19:32:37.840: INFO: node status heartbeat is unchanged for 3.998354464s, waiting for 1m20s Apr 7 19:32:38.842: INFO: node status heartbeat is unchanged for 5.00051505s, waiting for 1m20s Apr 7 19:32:39.840: INFO: node status heartbeat is unchanged for 5.998712096s, waiting for 1m20s Apr 7 19:32:40.841: INFO: node status heartbeat is unchanged for 6.999446737s, waiting for 1m20s Apr 7 19:32:41.839: INFO: node status heartbeat is unchanged for 7.997171743s, waiting for 1m20s Apr 7 19:32:42.839: INFO: node status heartbeat is unchanged for 8.997911956s, waiting for 1m20s Apr 7 19:32:43.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:32:43.842: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:43 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:43 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-07 19:32:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:43 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:32:44.840: INFO: node status heartbeat is unchanged for 1.000153423s, waiting for 1m20s Apr 7 19:32:45.840: INFO: node status heartbeat is unchanged for 2.000126337s, waiting for 1m20s Apr 7 19:32:46.841: INFO: node status heartbeat is unchanged for 3.000939005s, waiting for 1m20s Apr 7 19:32:47.840: INFO: node status heartbeat is unchanged for 4.000326905s, waiting for 1m20s Apr 7 19:32:48.839: INFO: node status heartbeat is unchanged for 4.999407064s, waiting for 1m20s Apr 7 19:32:49.839: INFO: node status heartbeat is unchanged for 5.999172507s, waiting for 1m20s Apr 7 19:32:50.840: INFO: node status heartbeat is unchanged for 7.000367004s, waiting for 1m20s Apr 7 19:32:51.839: INFO: node status heartbeat is unchanged for 7.999546544s, waiting for 1m20s Apr 7 19:32:52.839: INFO: node status heartbeat is unchanged for 8.99879493s, waiting for 1m20s Apr 7 19:32:53.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:32:53.842: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:43 +0000 UTC"}, + 
LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:53 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:53 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:53 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:32:54.840: INFO: node status heartbeat is unchanged for 999.768226ms, waiting for 1m20s Apr 7 19:32:55.840: INFO: node status heartbeat is unchanged for 2.000433235s, waiting for 1m20s Apr 7 19:32:56.841: INFO: node status heartbeat is unchanged for 3.000937134s, waiting for 1m20s Apr 7 19:32:57.841: INFO: node status heartbeat is unchanged for 4.000697639s, waiting for 1m20s Apr 7 19:32:58.841: INFO: node status heartbeat is unchanged for 5.001486736s, waiting for 1m20s Apr 7 19:32:59.841: INFO: node status heartbeat is unchanged for 6.00145695s, waiting for 1m20s Apr 7 19:33:00.842: INFO: node status heartbeat is unchanged for 7.001939783s, waiting for 1m20s Apr 7 19:33:01.841: INFO: node status heartbeat is unchanged for 8.000652634s, waiting for 1m20s Apr 7 19:33:02.841: INFO: node status heartbeat is unchanged for 9.000879864s, waiting for 1m20s Apr 7 19:33:03.841: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:33:03.843: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:32:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 7 19:33:04.841: INFO: node status heartbeat is unchanged for 1.000650764s, waiting for 1m20s Apr 7 19:33:05.841: INFO: node status heartbeat is unchanged for 2.0003251s, waiting for 1m20s Apr 7 19:33:06.842: INFO: node status heartbeat is unchanged for 3.001648457s, waiting for 1m20s Apr 7 19:33:07.841: INFO: node status heartbeat is unchanged for 4.000417698s, waiting for 1m20s Apr 7 19:33:08.840: INFO: node status heartbeat is unchanged for 4.998996307s, waiting for 1m20s Apr 7 19:33:09.840: INFO: node status heartbeat is unchanged for 5.999136835s, waiting for 1m20s Apr 7 19:33:10.842: INFO: node status heartbeat is unchanged for 7.001072681s, waiting for 1m20s Apr 7 19:33:11.842: INFO: node status heartbeat is unchanged for 8.001881946s, waiting for 1m20s Apr 7 19:33:12.841: INFO: node status heartbeat is unchanged for 9.00068036s, waiting for 1m20s Apr 7 19:33:13.842: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:33:13.844: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-07 19:33:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:33:14.841: INFO: node status heartbeat is unchanged for 999.70642ms, waiting for 1m20s Apr 7 19:33:15.841: INFO: node status heartbeat is unchanged for 1.999929644s, waiting for 1m20s Apr 7 19:33:16.841: INFO: node status heartbeat is unchanged for 2.999508129s, waiting for 1m20s Apr 7 19:33:17.840: INFO: node status heartbeat is unchanged for 3.998651604s, waiting for 1m20s Apr 7 19:33:18.840: INFO: node status heartbeat is unchanged for 4.998151312s, waiting for 1m20s Apr 7 19:33:19.840: INFO: node status heartbeat is unchanged for 5.99819905s, waiting for 1m20s Apr 7 19:33:20.839: INFO: node status heartbeat is unchanged for 6.997150527s, waiting for 1m20s Apr 7 19:33:21.841: INFO: node status heartbeat is unchanged for 7.999377348s, waiting for 1m20s Apr 7 19:33:22.841: INFO: node status heartbeat is unchanged for 8.999800876s, waiting for 1m20s Apr 7 19:33:23.843: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:33:23.845: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:13 +0000 UTC"}, + 
LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:23 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:23 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:23 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:33:24.842: INFO: node status heartbeat is unchanged for 999.865289ms, waiting for 1m20s Apr 7 19:33:25.840: INFO: node status heartbeat is unchanged for 1.996999769s, waiting for 1m20s Apr 7 19:33:26.842: INFO: node status heartbeat is unchanged for 2.999033627s, waiting for 1m20s Apr 7 19:33:27.841: INFO: node status heartbeat is unchanged for 3.99861951s, waiting for 1m20s Apr 7 19:33:28.843: INFO: node status heartbeat is unchanged for 4.999976517s, waiting for 1m20s Apr 7 19:33:29.841: INFO: node status heartbeat is unchanged for 5.997989696s, waiting for 1m20s Apr 7 19:33:30.842: INFO: node status heartbeat is unchanged for 6.999213953s, waiting for 1m20s Apr 7 19:33:31.839: INFO: node status heartbeat is unchanged for 7.996500931s, waiting for 1m20s Apr 7 19:33:32.842: INFO: node status heartbeat is unchanged for 8.998926123s, waiting for 1m20s Apr 7 19:33:33.841: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:33:33.843: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 7 19:33:34.840: INFO: node status heartbeat is unchanged for 999.289281ms, waiting for 1m20s Apr 7 19:33:35.841: INFO: node status heartbeat is unchanged for 1.999947862s, waiting for 1m20s Apr 7 19:33:36.839: INFO: node status heartbeat is unchanged for 2.998542565s, waiting for 1m20s Apr 7 19:33:37.840: INFO: node status heartbeat is unchanged for 3.998837704s, waiting for 1m20s Apr 7 19:33:38.841: INFO: node status heartbeat is unchanged for 4.999811599s, waiting for 1m20s Apr 7 19:33:39.840: INFO: node status heartbeat is unchanged for 5.998946822s, waiting for 1m20s Apr 7 19:33:40.839: INFO: node status heartbeat is unchanged for 6.998359486s, waiting for 1m20s Apr 7 19:33:41.840: INFO: node status heartbeat is unchanged for 7.999298176s, waiting for 1m20s Apr 7 19:33:42.842: INFO: node status heartbeat is unchanged for 9.000948809s, waiting for 1m20s Apr 7 19:33:43.842: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:33:43.844: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:43 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:43 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-07 19:33:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:43 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:33:44.842: INFO: node status heartbeat is unchanged for 1.000031632s, waiting for 1m20s Apr 7 19:33:45.840: INFO: node status heartbeat is unchanged for 1.997940138s, waiting for 1m20s Apr 7 19:33:46.841: INFO: node status heartbeat is unchanged for 2.999163325s, waiting for 1m20s Apr 7 19:33:47.840: INFO: node status heartbeat is unchanged for 3.997808444s, waiting for 1m20s Apr 7 19:33:48.841: INFO: node status heartbeat is unchanged for 4.999115907s, waiting for 1m20s Apr 7 19:33:49.840: INFO: node status heartbeat is unchanged for 5.997922345s, waiting for 1m20s Apr 7 19:33:50.840: INFO: node status heartbeat is unchanged for 6.998594163s, waiting for 1m20s Apr 7 19:33:51.839: INFO: node status heartbeat is unchanged for 7.997530529s, waiting for 1m20s Apr 7 19:33:52.842: INFO: node status heartbeat is unchanged for 9.000252792s, waiting for 1m20s Apr 7 19:33:53.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:33:53.843: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:43 +0000 UTC"}, + 
LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:53 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:53 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:53 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:33:54.840: INFO: node status heartbeat is unchanged for 1.000098539s, waiting for 1m20s Apr 7 19:33:55.840: INFO: node status heartbeat is unchanged for 2.000225919s, waiting for 1m20s Apr 7 19:33:56.840: INFO: node status heartbeat is unchanged for 2.999685523s, waiting for 1m20s Apr 7 19:33:57.839: INFO: node status heartbeat is unchanged for 3.999279236s, waiting for 1m20s Apr 7 19:33:58.839: INFO: node status heartbeat is unchanged for 4.998900631s, waiting for 1m20s Apr 7 19:33:59.840: INFO: node status heartbeat is unchanged for 5.99949066s, waiting for 1m20s Apr 7 19:34:00.839: INFO: node status heartbeat is unchanged for 6.998734236s, waiting for 1m20s Apr 7 19:34:01.839: INFO: node status heartbeat is unchanged for 7.99883484s, waiting for 1m20s Apr 7 19:34:02.841: INFO: node status heartbeat is unchanged for 9.000765357s, waiting for 1m20s Apr 7 19:34:03.839: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:34:03.842: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:33:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 7 19:34:04.839: INFO: node status heartbeat is unchanged for 999.367486ms, waiting for 1m20s Apr 7 19:34:05.839: INFO: node status heartbeat is unchanged for 1.999924637s, waiting for 1m20s Apr 7 19:34:06.840: INFO: node status heartbeat is unchanged for 3.001053877s, waiting for 1m20s Apr 7 19:34:07.841: INFO: node status heartbeat is unchanged for 4.001664813s, waiting for 1m20s Apr 7 19:34:08.841: INFO: node status heartbeat is unchanged for 5.001444494s, waiting for 1m20s Apr 7 19:34:09.841: INFO: node status heartbeat is unchanged for 6.0014943s, waiting for 1m20s Apr 7 19:34:10.840: INFO: node status heartbeat is unchanged for 7.000572638s, waiting for 1m20s Apr 7 19:34:11.840: INFO: node status heartbeat is unchanged for 8.000393025s, waiting for 1m20s Apr 7 19:34:12.840: INFO: node status heartbeat is unchanged for 9.000343144s, waiting for 1m20s Apr 7 19:34:13.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:34:13.842: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-07 19:34:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:34:14.839: INFO: node status heartbeat is unchanged for 999.784413ms, waiting for 1m20s Apr 7 19:34:15.839: INFO: node status heartbeat is unchanged for 1.999485307s, waiting for 1m20s Apr 7 19:34:16.840: INFO: node status heartbeat is unchanged for 2.999964537s, waiting for 1m20s Apr 7 19:34:17.841: INFO: node status heartbeat is unchanged for 4.001020282s, waiting for 1m20s Apr 7 19:34:18.842: INFO: node status heartbeat is unchanged for 5.001877909s, waiting for 1m20s Apr 7 19:34:19.840: INFO: node status heartbeat is unchanged for 6.000575226s, waiting for 1m20s Apr 7 19:34:20.841: INFO: node status heartbeat is unchanged for 7.001065806s, waiting for 1m20s Apr 7 19:34:21.841: INFO: node status heartbeat is unchanged for 8.001240218s, waiting for 1m20s Apr 7 19:34:22.841: INFO: node status heartbeat is unchanged for 9.001281859s, waiting for 1m20s Apr 7 19:34:23.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:34:23.843: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:13 +0000 UTC"}, + 
LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:23 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:23 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:23 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:34:24.841: INFO: node status heartbeat is unchanged for 1.001111184s, waiting for 1m20s Apr 7 19:34:25.841: INFO: node status heartbeat is unchanged for 2.000487376s, waiting for 1m20s Apr 7 19:34:26.841: INFO: node status heartbeat is unchanged for 3.001015406s, waiting for 1m20s Apr 7 19:34:27.842: INFO: node status heartbeat is unchanged for 4.002035952s, waiting for 1m20s Apr 7 19:34:28.841: INFO: node status heartbeat is unchanged for 5.000762848s, waiting for 1m20s Apr 7 19:34:29.840: INFO: node status heartbeat is unchanged for 5.999683915s, waiting for 1m20s Apr 7 19:34:30.841: INFO: node status heartbeat is unchanged for 7.000582449s, waiting for 1m20s Apr 7 19:34:31.841: INFO: node status heartbeat is unchanged for 8.001007162s, waiting for 1m20s Apr 7 19:34:32.841: INFO: node status heartbeat is unchanged for 9.001341232s, waiting for 1m20s Apr 7 19:34:33.839: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:34:33.841: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 7 19:34:34.840: INFO: node status heartbeat is unchanged for 1.001492542s, waiting for 1m20s Apr 7 19:34:35.841: INFO: node status heartbeat is unchanged for 2.001935606s, waiting for 1m20s Apr 7 19:34:36.839: INFO: node status heartbeat is unchanged for 3.000541321s, waiting for 1m20s Apr 7 19:34:37.840: INFO: node status heartbeat is unchanged for 4.001529012s, waiting for 1m20s Apr 7 19:34:38.840: INFO: node status heartbeat is unchanged for 5.00098518s, waiting for 1m20s Apr 7 19:34:39.839: INFO: node status heartbeat is unchanged for 5.999886463s, waiting for 1m20s Apr 7 19:34:40.839: INFO: node status heartbeat is unchanged for 6.999822704s, waiting for 1m20s Apr 7 19:34:41.839: INFO: node status heartbeat is unchanged for 8.000458823s, waiting for 1m20s Apr 7 19:34:42.840: INFO: node status heartbeat is unchanged for 9.000762749s, waiting for 1m20s Apr 7 19:34:43.839: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:34:43.842: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:43 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:43 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-07 19:34:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:43 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:34:44.839: INFO: node status heartbeat is unchanged for 999.776616ms, waiting for 1m20s Apr 7 19:34:45.839: INFO: node status heartbeat is unchanged for 2.000116131s, waiting for 1m20s Apr 7 19:34:46.840: INFO: node status heartbeat is unchanged for 3.001161233s, waiting for 1m20s Apr 7 19:34:47.840: INFO: node status heartbeat is unchanged for 4.000792458s, waiting for 1m20s Apr 7 19:34:48.839: INFO: node status heartbeat is unchanged for 5.00028732s, waiting for 1m20s Apr 7 19:34:49.839: INFO: node status heartbeat is unchanged for 5.999640582s, waiting for 1m20s Apr 7 19:34:50.839: INFO: node status heartbeat is unchanged for 7.000027692s, waiting for 1m20s Apr 7 19:34:51.839: INFO: node status heartbeat is unchanged for 8.000153721s, waiting for 1m20s Apr 7 19:34:52.840: INFO: node status heartbeat is unchanged for 9.000907966s, waiting for 1m20s Apr 7 19:34:53.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:34:53.843: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:43 +0000 UTC"}, + 
LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:53 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:53 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:53 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:34:54.841: INFO: node status heartbeat is unchanged for 1.001011209s, waiting for 1m20s Apr 7 19:34:55.842: INFO: node status heartbeat is unchanged for 2.001740075s, waiting for 1m20s Apr 7 19:34:56.842: INFO: node status heartbeat is unchanged for 3.001809271s, waiting for 1m20s Apr 7 19:34:57.840: INFO: node status heartbeat is unchanged for 4.000378747s, waiting for 1m20s Apr 7 19:34:58.841: INFO: node status heartbeat is unchanged for 5.001326655s, waiting for 1m20s Apr 7 19:34:59.841: INFO: node status heartbeat is unchanged for 6.001222115s, waiting for 1m20s Apr 7 19:35:00.843: INFO: node status heartbeat is unchanged for 7.003258556s, waiting for 1m20s Apr 7 19:35:01.840: INFO: node status heartbeat is unchanged for 7.99959974s, waiting for 1m20s Apr 7 19:35:02.842: INFO: node status heartbeat is unchanged for 9.001549089s, waiting for 1m20s Apr 7 19:35:03.841: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:35:03.843: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:34:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:03 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } Apr 7 19:35:04.842: INFO: node status heartbeat is unchanged for 1.001061736s, waiting for 1m20s Apr 7 19:35:05.839: INFO: node status heartbeat is unchanged for 1.998629985s, waiting for 1m20s Apr 7 19:35:06.842: INFO: node status heartbeat is unchanged for 3.000974343s, waiting for 1m20s Apr 7 19:35:07.841: INFO: node status heartbeat is unchanged for 4.000563709s, waiting for 1m20s Apr 7 19:35:08.841: INFO: node status heartbeat is unchanged for 5.00057971s, waiting for 1m20s Apr 7 19:35:09.841: INFO: node status heartbeat is unchanged for 5.999771544s, waiting for 1m20s Apr 7 19:35:10.841: INFO: node status heartbeat is unchanged for 7.000233374s, waiting for 1m20s Apr 7 19:35:11.842: INFO: node status heartbeat is unchanged for 8.001495681s, waiting for 1m20s Apr 7 19:35:12.843: INFO: node status heartbeat is unchanged for 9.001963595s, waiting for 1m20s Apr 7 19:35:13.840: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:35:13.843: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: 
s"2021-04-07 19:35:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:13 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:35:14.842: INFO: node status heartbeat is unchanged for 1.001664343s, waiting for 1m20s Apr 7 19:35:15.842: INFO: node status heartbeat is unchanged for 2.001392022s, waiting for 1m20s Apr 7 19:35:16.842: INFO: node status heartbeat is unchanged for 3.001804175s, waiting for 1m20s Apr 7 19:35:17.841: INFO: node status heartbeat is unchanged for 4.000716226s, waiting for 1m20s Apr 7 19:35:18.842: INFO: node status heartbeat is unchanged for 5.001373166s, waiting for 1m20s Apr 7 19:35:19.841: INFO: node status heartbeat is unchanged for 6.000806316s, waiting for 1m20s Apr 7 19:35:20.841: INFO: node status heartbeat is unchanged for 7.001019053s, waiting for 1m20s Apr 7 19:35:21.840: INFO: node status heartbeat is unchanged for 7.999528952s, waiting for 1m20s Apr 7 19:35:22.842: INFO: node status heartbeat is unchanged for 9.001713379s, waiting for 1m20s Apr 7 19:35:23.842: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:35:23.844: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:13 +0000 UTC"}, + 
LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:23 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:23 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:23 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:35:24.842: INFO: node status heartbeat is unchanged for 999.971275ms, waiting for 1m20s Apr 7 19:35:25.840: INFO: node status heartbeat is unchanged for 1.997704584s, waiting for 1m20s Apr 7 19:35:26.842: INFO: node status heartbeat is unchanged for 2.999919167s, waiting for 1m20s Apr 7 19:35:27.841: INFO: node status heartbeat is unchanged for 3.998726049s, waiting for 1m20s Apr 7 19:35:28.841: INFO: node status heartbeat is unchanged for 4.999326782s, waiting for 1m20s Apr 7 19:35:29.839: INFO: node status heartbeat is unchanged for 5.997461323s, waiting for 1m20s Apr 7 19:35:30.840: INFO: node status heartbeat is unchanged for 6.997690887s, waiting for 1m20s Apr 7 19:35:31.840: INFO: node status heartbeat is unchanged for 7.99807065s, waiting for 1m20s Apr 7 19:35:32.840: INFO: node status heartbeat is unchanged for 8.998462664s, waiting for 1m20s Apr 7 19:35:33.839: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 7 19:35:33.841: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 201259667456}, s: "196542644Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, "intel.com/intel_sriov_netdevice": {i: 
resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 178911973376}, s: "174718724Ki", Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:48:40 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-07 19:35:33 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:03 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-04-07 18:46:54 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } Apr 7 19:35:34.839: INFO: node status heartbeat is unchanged for 1.00049541s, waiting for 1m20s Apr 7 19:35:35.841: INFO: node status heartbeat is unchanged for 2.002182251s, waiting for 1m20s Apr 7 19:35:36.841: INFO: node status heartbeat is unchanged for 3.002533873s, waiting for 1m20s Apr 7 19:35:37.840: INFO: node status heartbeat is unchanged for 4.001532974s, waiting for 1m20s Apr 7 19:35:38.840: INFO: node status heartbeat is unchanged for 5.001500281s, waiting for 1m20s Apr 7 19:35:39.841: INFO: node status heartbeat is unchanged for 6.002303569s, waiting for 1m20s Apr 7 19:35:39.844: INFO: node status heartbeat is unchanged for 6.005251125s, waiting for 1m20s STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:35:39.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-5118" for this suite. 
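The block above is the NodeLease spec's heartbeat watch: the test polls node1's NodeStatus once a second and prints a diff whenever two consecutive reads differ. In each diff the "-" entry is the previously observed LastHeartbeatTime and the "+" entry is the new one; only those timestamps advance (roughly every 10 seconds, per the "heartbeat changed in 10s" lines), while the Ready condition and all other fields stay identical, which is exactly what the final "verify node is still in ready status" step confirms. Below is a minimal standalone sketch of the same kind of check using client-go directly rather than the e2e framework; the kubeconfig path, node name, and one-second poll interval are taken from this log, it watches the Ready condition's heartbeat for simplicity, and the code is illustrative, not the suite's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	var lastHeartbeat time.Time
	for i := 0; i < 300; i++ { // poll once a second, as the spec does
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "node1", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type != corev1.NodeReady {
				continue
			}
			// The node must stay Ready even though full status reports are infrequent.
			if cond.Status != corev1.ConditionTrue {
				panic("node left Ready state during the watch")
			}
			if cond.LastHeartbeatTime.Time.After(lastHeartbeat) {
				fmt.Printf("node status heartbeat advanced to %s\n", cond.LastHeartbeatTime.Time)
				lastHeartbeat = cond.LastHeartbeatTime.Time
			} else {
				fmt.Println("node status heartbeat is unchanged")
			}
		}
		time.Sleep(time.Second)
	}
}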
• [SLOW TEST:300.170 seconds] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":1,"skipped":4,"failed":0} Apr 7 19:35:39.864: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:39.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods Apr 7 19:30:40.294: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Apr 7 19:30:40.312: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-3584 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:694 STEP: getting restart delay-0 Apr 7 19:32:43.616: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-04-07 19:31:58 +0000 UTC restartedAt=2021-04-07 19:32:42 +0000 UTC (44s) STEP: getting restart delay-1 Apr 7 19:34:18.990: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-04-07 19:32:47 +0000 UTC restartedAt=2021-04-07 19:34:17 +0000 UTC (1m30s) STEP: getting restart delay-2 Apr 7 19:37:14.701: INFO: getRestartDelay: restartCount = 6, finishedAt=2021-04-07 19:34:22 +0000 UTC restartedAt=2021-04-07 19:37:13 +0000 UTC (2m51s) STEP: updating the image Apr 7 19:37:15.211: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Apr 7 19:37:40.276: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-04-07 19:37:25 +0000 UTC restartedAt=2021-04-07 19:37:39 +0000 UTC (14s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:37:40.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3584" for this suite. 
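The restart delays recorded by this spec grow roughly geometrically: 44s, then 1m30s, then 2m51s, consistent with the kubelet doubling its crash-loop back-off between restarts, and after the image update the delay drops back to 14s because the back-off timer is reset for the new image. The sketch below shows that expected progression; the 10-second initial delay and 5-minute cap are the kubelet defaults this test family exercises (the cap is the MaxContainerBackOff named by the next spec), and the code is illustrative arithmetic, not kubelet source.

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 10 * time.Second  // kubelet's base crash-loop back-off
		maxBackOff   = 300 * time.Second // MaxContainerBackOff
	)

	// The delay doubles after every restart until it reaches the cap.
	delay := initialDelay
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: expected delay ~%s\n", restart, delay)
		delay *= 2
		if delay > maxBackOff {
			delay = maxBackOff
		}
	}

	// Updating the container image resets the back-off, so the next
	// restart waits only the initial delay again (the 14s seen above).
	fmt.Printf("after image update: expected delay ~%s\n", initialDelay)
}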
• [SLOW TEST:420.298 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:694 ------------------------------ {"msg":"PASSED [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":1,"skipped":61,"failed":0} Apr 7 19:37:40.287: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 19:30:50.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4487 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:735 STEP: getting restart delay when capped Apr 7 19:42:36.761: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-04-07 19:37:20 +0000 UTC restartedAt=2021-04-07 19:42:35 +0000 UTC (5m15s) Apr 7 19:47:44.933: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-04-07 19:42:40 +0000 UTC restartedAt=2021-04-07 19:47:43 +0000 UTC (5m3s) Apr 7 19:52:51.193: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-04-07 19:47:48 +0000 UTC restartedAt=2021-04-07 19:52:50 +0000 UTC (5m2s) STEP: getting restart delay after a capped delay Apr 7 19:58:10.494: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-04-07 19:52:55 +0000 UTC restartedAt=2021-04-07 19:58:09 +0000 UTC (5m14s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 19:58:10.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4487" for this suite. • [SLOW TEST:1640.284 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:735 ------------------------------ {"msg":"PASSED [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":2,"skipped":148,"failed":0} Apr 7 19:58:10.506: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted","total":-1,"completed":1,"skipped":122,"failed":0} Apr 7 19:32:09.581: INFO: Running AfterSuite actions on all nodes Apr 7 19:58:10.565: INFO: Running AfterSuite actions on node 1 Apr 7 19:58:10.565: INFO: Skipping dumping logs from cluster Ran 30 of 4994 Specs in 1651.119 seconds SUCCESS! -- 30 Passed | 0 Failed | 0 Pending | 4964 Skipped Ginkgo ran 1 suite in 27m32.629915831s Test Suite Passed
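The capped-back-off spec makes the complementary check: once the back-off has saturated, each measured gap between finishedAt and restartedAt sits just above the 5-minute MaxContainerBackOff (5m15s, 5m3s, 5m2s, 5m14s), with the excess accounted for by image pull and pod sync overhead. A rough version of that assertion is sketched below; the 60-second slack is an assumed tolerance for illustration, not the suite's exact buffer.

package main

import (
	"fmt"
	"time"
)

// withinCap reports whether a measured restart delay looks like a
// saturated back-off: at least MaxContainerBackOff, plus a little slack
// for image pull and pod sync overhead.
func withinCap(measured time.Duration) bool {
	const maxBackOff = 300 * time.Second
	const slack = 60 * time.Second // assumed tolerance, not the suite's exact value
	return measured >= maxBackOff && measured <= maxBackOff+slack
}

func main() {
	// The four delays measured by the capped-back-off spec above.
	for _, d := range []time.Duration{
		5*time.Minute + 15*time.Second, // restartCount = 7
		5*time.Minute + 3*time.Second,  // restartCount = 8
		5*time.Minute + 2*time.Second,  // restartCount = 9
		5*time.Minute + 14*time.Second, // restartCount = 10, after a capped delay
	} {
		fmt.Printf("delay %s within cap window: %v\n", d, withinCap(d))
	}
}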