Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1620420604 - Will randomize all specs
Will run 5484 specs

Running in parallel across 10 nodes

May 7 20:50:05.827: INFO: >>> kubeConfig: /root/.kube/config
May 7 20:50:05.831: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 7 20:50:05.857: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 7 20:50:05.924: INFO: The status of Pod cmk-init-discover-node1-krbjn is Succeeded, skipping waiting
May 7 20:50:05.924: INFO: The status of Pod cmk-init-discover-node2-kd9gg is Succeeded, skipping waiting
May 7 20:50:05.924: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 7 20:50:05.924: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 7 20:50:05.924: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 7 20:50:05.943: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 7 20:50:05.943: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 7 20:50:05.943: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 7 20:50:05.943: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 7 20:50:05.943: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 7 20:50:05.943: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 7 20:50:05.943: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 7 20:50:05.943: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 7 20:50:05.943: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 7 20:50:05.943: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 7 20:50:05.943: INFO: e2e test version: v1.19.10
May 7 20:50:05.943: INFO: kube-apiserver version: v1.19.8
May 7 20:50:05.944: INFO: >>> kubeConfig: /root/.kube/config
May 7 20:50:05.949: INFO: Cluster IP family: ipv4
May 7 20:50:05.947: INFO: >>> kubeConfig: /root/.kube/config
May 7 20:50:05.966: INFO: Cluster IP family: ipv4
May 7 20:50:05.955: INFO: >>> kubeConfig: /root/.kube/config
May 7 20:50:05.977: INFO: Cluster IP family: ipv4
May 7 20:50:05.968: INFO: >>> kubeConfig: /root/.kube/config
May 7 20:50:05.992: INFO: Cluster IP family: ipv4
May 7 20:50:05.978: INFO: >>> kubeConfig: /root/.kube/config
May 7 20:50:06.000: INFO: Cluster IP family: ipv4
May 7 20:50:05.978: INFO: >>> kubeConfig: /root/.kube/config
May 7 20:50:06.000: INFO: Cluster IP family: ipv4
May 7 20:50:05.989: INFO: >>> kubeConfig: /root/.kube/config
May 7 20:50:06.005: INFO: Cluster IP family: ipv4
May 7 20:50:05.994: INFO: >>> kubeConfig: /root/.kube/config
May 7 20:50:06.014: INFO: Cluster IP family: ipv4
May 7 20:50:06.008: INFO: >>> kubeConfig: /root/.kube/config
May 7 20:50:06.030: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 20:50:05.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
May 7 20:50:06.012: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 7 20:50:06.016: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a docker exec liveness probe with timeout
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215
May 7 20:50:06.018: INFO: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API
[AfterEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 20:50:06.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3170" for this suite.

S [SKIPPING] [0.036 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be restarted with a docker exec liveness probe with timeout [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:215

The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:217
------------------------------
------------------------------
May 7 20:50:06.035: INFO: >>> kubeConfig: /root/.kube/config
May 7 20:50:06.049: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 20:50:06.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename localssd
May 7 20:50:06.042: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 7 20:50:06.044: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD]
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:36
May 7 20:50:06.046: INFO: Only supported for providers [gke] (not skeleton)
[AfterEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 20:50:06.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "localssd-8724" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[k8s.io] GKE local SSD [Feature:GKELocalSSD]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should write and read from node local SSD [Feature:GKELocalSSD] [BeforeEach]
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:40

Only supported for providers [gke] (not skeleton)

_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:37
------------------------------
------------------------------
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 7 20:50:06.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename autoscaling
May 7 20:50:06.336: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 7 20:50:06.338: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71
May 7 20:50:06.340: INFO: Only supported for providers [gce gke kubemark] (not skeleton)
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 7 20:50:06.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "autoscaling-2921" for this suite.
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0507 20:50:06.350723 29 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 183 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001620d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc004184750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000c3db80, 0xc004184750, 0xc000c3db80, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc004184750, 0x479aa25d947e64, 0xc004184778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0x82, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc003dd76e0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0018a5980, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0018a5980, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0004052b8, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0041856c0, 0xc00294a960, 0x52e17e0, 0xc000160900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00294a960, 0x0, 0x52e17e0, 0xc000160900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00294a960, 0x52e17e0, 0xc000160900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0039a8000, 0xc00294a960, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0039a8000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0039a8000, 0xc0039a0030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00016c280, 0x7f183494ca18, 0xc00214e780, 0x4c22012, 0x14, 0xc003906360, 0x3, 0x3, 0x5396840, 0xc000160900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc00214e780, 0x4c22012, 0x14, 0xc0026d8b40, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc00214e780, 0x4c22012, 0x14, 0xc000d3b280, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00214e780) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc00214e780) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc00214e780, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale down empty nodes [Feature:ClusterAutoscalerScalability3] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:210 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:06.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-pools May 7 20:50:06.382: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 7 20:50:06.383: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:34 May 7 20:50:06.385: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:06.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-pools-9391" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a cluster with multiple node pools [Feature:GKENodePool] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:38 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:06.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling May 7 20:50:06.909: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 7 20:50:06.911: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 7 20:50:06.914: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:06.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-4520" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0507 20:50:06.925060 26 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 46 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc000564750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002053d00, 0xc000564750, 0xc002053d00, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc000564750, 0x479aa27fd1c810, 0xc000564778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0x8e, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc001cd72c0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000b78a20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000b78a20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000680de8, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0005656c0, 0xc003730f00, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003730f00, 0x0, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003730f00, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001af43c0, 0xc003730f00, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001af43c0, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001af43c0, 0xc002677648) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7f13365b8458, 0xc001493c80, 0x4c22012, 0x14, 0xc0026935f0, 0x3, 0x3, 0x5396840, 0xc0001e08c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc001493c80, 0x4c22012, 0x14, 0xc003404fc0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc001493c80, 0x4c22012, 0x14, 0xc00239eb20, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001493c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001493c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001493c80, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:297 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:07.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:183 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:09.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6077" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":1,"skipped":422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:09.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:148 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:09.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1949" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":2,"skipped":524,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:06.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 May 7 20:50:06.258: INFO: Waiting up to 5m0s for pod "busybox-user-0-2afdad65-c332-409d-95a3-61c8caedffce" in namespace "security-context-test-2853" to be "Succeeded or Failed" May 7 20:50:06.261: INFO: Pod "busybox-user-0-2afdad65-c332-409d-95a3-61c8caedffce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.822106ms May 7 20:50:08.265: INFO: Pod "busybox-user-0-2afdad65-c332-409d-95a3-61c8caedffce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006943654s May 7 20:50:10.268: INFO: Pod "busybox-user-0-2afdad65-c332-409d-95a3-61c8caedffce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010451513s May 7 20:50:12.274: INFO: Pod "busybox-user-0-2afdad65-c332-409d-95a3-61c8caedffce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01565022s May 7 20:50:12.274: INFO: Pod "busybox-user-0-2afdad65-c332-409d-95a3-61c8caedffce" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:12.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2853" for this suite. • [SLOW TEST:6.059 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":64,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:06.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime May 7 20:50:06.874: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 7 20:50:06.875: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:12.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1124" for this suite. 
• [SLOW TEST:6.075 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:377 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":1,"skipped":347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:06.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:13.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4874" for this suite. 
• [SLOW TEST:7.080 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":1,"skipped":181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:06.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 May 7 20:50:06.687: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-7389" to be "Succeeded or Failed" May 7 20:50:06.689: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 1.991022ms May 7 20:50:08.692: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005485207s May 7 20:50:10.697: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009708166s May 7 20:50:12.700: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013254534s May 7 20:50:14.703: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01602009s May 7 20:50:14.703: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:14.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7389" for this suite. 
• [SLOW TEST:8.070 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:09.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 7 20:50:14.707: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:14.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1910" for this suite. • [SLOW TEST:5.072 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":243,"failed":0} S ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":3,"skipped":721,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:06.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples May 7 20:50:06.617: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 7 20:50:06.619: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is 
disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 May 7 20:50:06.627: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 STEP: creating secret and pod May 7 20:50:06.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3765 create -f -' May 7 20:50:07.154: INFO: stderr: "" May 7 20:50:07.154: INFO: stdout: "secret/test-secret created\n" May 7 20:50:07.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3765 create -f -' May 7 20:50:07.570: INFO: stderr: "" May 7 20:50:07.570: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly May 7 20:50:15.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3765 logs secret-test-pod test-container' May 7 20:50:15.724: INFO: stderr: "" May 7 20:50:15.724: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:15.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-3765" for this suite. • [SLOW TEST:9.135 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret","total":-1,"completed":1,"skipped":247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:12.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:18.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-7221" for this suite. • [SLOW TEST:6.055 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":2,"skipped":120,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:13.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 May 7 20:50:13.295: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-6adcb770-7f67-4267-b052-0494e5396693" in namespace "security-context-test-400" to be "Succeeded or Failed" May 7 20:50:13.300: INFO: Pod "alpine-nnp-nil-6adcb770-7f67-4267-b052-0494e5396693": Phase="Pending", Reason="", readiness=false. Elapsed: 4.559313ms May 7 20:50:15.304: INFO: Pod "alpine-nnp-nil-6adcb770-7f67-4267-b052-0494e5396693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008491622s May 7 20:50:17.307: INFO: Pod "alpine-nnp-nil-6adcb770-7f67-4267-b052-0494e5396693": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011634047s May 7 20:50:19.311: INFO: Pod "alpine-nnp-nil-6adcb770-7f67-4267-b052-0494e5396693": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015683701s May 7 20:50:21.315: INFO: Pod "alpine-nnp-nil-6adcb770-7f67-4267-b052-0494e5396693": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019690066s May 7 20:50:21.315: INFO: Pod "alpine-nnp-nil-6adcb770-7f67-4267-b052-0494e5396693" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:21.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-400" for this suite. 
• [SLOW TEST:8.074 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":526,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:21.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 7 20:50:21.639: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:21.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-1832" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0507 20:50:21.649013 33 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 201 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001920d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0027b0750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003a60b60, 0xc0027b0750, 0xc003a60b60, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0027b0750, 0x479aa5ed6faeb2, 0xc0027b0778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0x95, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc004fb6c60, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0015c5560, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0015c5560, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000e0f3e8, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0027b16c0, 0xc0009ec4b0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0009ec4b0, 0x0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0009ec4b0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000c38b40, 0xc0009ec4b0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000c38b40, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000c38b40, 0xc002108030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7f7974a780e8, 0xc00451dc80, 0x4c22012, 0x14, 0xc0045bd200, 0x3, 0x3, 0x5396840, 0xc000190900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc00451dc80, 0x4c22012, 0x14, 0xc00462cfc0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc00451dc80, 0x4c22012, 0x14, 0xc0045c95e0, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00451dc80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc00451dc80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc00451dc80, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale up at all [Feature:ClusterAutoscalerScalability1] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:138 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:18.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:22.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2046" for this suite. 
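------------------------------
The sysctl spec above creates a pod whose pod-level security context requests kernel.shm_rmid_forced and then reads the effective value back from the pod's logs. A minimal sketch of such a pod using client-go types follows; the busybox image and the echo command are illustrative assumptions, not the test's exact fixture.

package sysctlexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sysctlPod sets kernel.shm_rmid_forced=1 at the pod level and prints the
// effective value so a caller can verify it from the container log.
func sysctlPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29", // assumed image
				Command: []string{"/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"},
			}},
		},
	}
}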
• ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":3,"skipped":315,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:22.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:88 [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:22.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-8205" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":4,"skipped":324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:15.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 May 7 20:50:15.946: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-1937" to be "Succeeded or Failed" May 7 20:50:15.947: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 1.695414ms May 7 20:50:17.952: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00587768s May 7 20:50:19.956: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010029301s May 7 20:50:21.959: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013642507s May 7 20:50:23.964: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018051837s May 7 20:50:23.964: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:24.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1937" for this suite. 
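------------------------------
The implicit-nonroot-uid pod above exercises runAsNonRoot without an explicit runAsUser, so the kubelet has to rely on the non-root UID baked into the image. A sketch of that container shape, assuming a hypothetical image whose Dockerfile declares a non-root USER:

package securitycontextexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// implicitNonRootPod relies on the image's own USER directive: RunAsNonRoot is
// set, RunAsUser is left nil, and the kubelet rejects the pod only if the
// image resolves to UID 0.
func implicitNonRootPod(namespace, image string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "implicit-nonroot-uid", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "implicit-nonroot-uid",
				Image: image, // assumed to carry a non-root USER in its metadata
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: boolPtr(true),
					// RunAsUser deliberately omitted: the UID comes from the image.
				},
			}},
		},
	}
}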
• [SLOW TEST:8.095 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":2,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:24.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:24.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-6797" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":3,"skipped":366,"failed":0} SS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:22.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:26.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-19" for this suite. 
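------------------------------
The "should be able to pull image" spec above follows the three steps the log prints: create the container, check the container status, delete the container. A rough client-go equivalent is sketched below; the poll interval, timeout, image and command are illustrative choices, not the suite's actual values.

package runtimeexample

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// pullImageOnce creates a single-container pod, waits until the container has
// started (meaning the image pulled successfully), then deletes the pod again.
func pullImageOnce(ctx context.Context, c kubernetes.Interface, ns, image string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-test", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers:    []corev1.Container{{Name: "c", Image: image, Command: []string{"sleep", "3600"}}},
		},
	}
	if _, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return err
	}
	defer c.CoreV1().Pods(ns).Delete(ctx, pod.Name, metav1.DeleteOptions{})

	// Check the container status: done once the container reports Running,
	// fail fast if the kubelet reports an image pull error.
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		p, err := c.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cs := range p.Status.ContainerStatuses {
			if cs.State.Running != nil {
				return true, nil
			}
			if cs.State.Waiting != nil && cs.State.Waiting.Reason == "ErrImagePull" {
				return false, fmt.Errorf("image pull failed: %s", cs.State.Waiting.Message)
			}
		}
		return false, nil
	})
}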
• ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":3,"skipped":920,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:23.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:26.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2113" for this suite. •SS ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":5,"skipped":477,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:06.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods May 7 20:50:06.298: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 7 20:50:06.300: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:774 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:26.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5799" for this suite. 
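------------------------------
The readiness-gate spec above declares two custom gate conditions and then patches the pod's status to flip them, which is what moves the pod in and out of Ready. A sketch of the spec plus the status patch follows; the condition names match the log, while the pod name and the patch helper are illustrative.

package readinessgateexample

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// gatedPod declares two readiness gates; the pod only becomes Ready once both
// custom conditions are True in its status.
func gatedPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-gate-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			ReadinessGates: []corev1.PodReadinessGate{
				{ConditionType: corev1.PodConditionType("k8s.io/test-condition1")},
				{ConditionType: corev1.PodConditionType("k8s.io/test-condition2")},
			},
			Containers: []corev1.Container{{Name: "c", Image: "busybox:1.29", Command: []string{"sleep", "3600"}}},
		},
	}
}

// setGate patches the pod's status subresource, setting one gate condition.
func setGate(ctx context.Context, c kubernetes.Interface, ns, pod, condition string, value corev1.ConditionStatus) error {
	patch := []byte(fmt.Sprintf(
		`{"status":{"conditions":[{"type":%q,"status":%q}]}}`, condition, value))
	_, err := c.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}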
• [SLOW TEST:20.086 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:774 ------------------------------ SSS ------------------------------ {"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":1,"skipped":94,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:26.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 7 20:50:26.693: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:26.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-3733" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0507 20:50:26.704154 27 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 244 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00175c750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0012935e0, 0xc00175c750, 0xc0012935e0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00175c750, 0x479aa71abec42a, 0xc00175c778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0x93, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc003d17ef0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000d52480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000d52480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000332a60, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00175d6c0, 0xc0037bb680, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0037bb680, 0x0, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0037bb680, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001905680, 0xc0037bb680, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001905680, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001905680, 0xc003a19ce0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7fc7307f89b0, 0xc001987b00, 0x4c22012, 0x14, 0xc003303dd0, 0x3, 0x3, 0x5396840, 0xc0001e08c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc001987b00, 0x4c22012, 0x14, 0xc002f8f300, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc001987b00, 0x4c22012, 0x14, 0xc002d00c20, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001987b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001987b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001987b00, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:335 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:26.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 7 20:50:26.901: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:26.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-5088" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0507 20:50:26.910441 27 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 244 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00175c750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00338a160, 0xc00175c750, 0xc00338a160, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00175c750, 0x479aa7270b6bde, 0xc00175c778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0xa6, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002a6b620, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000d52480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000d52480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000332a60, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00175d6c0, 0xc0037bb4a0, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0037bb4a0, 0x0, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0037bb4a0, 0x52e17e0, 0xc0001e08c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001905680, 0xc0037bb4a0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001905680, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001905680, 0xc003a19ce0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7fc7307f89b0, 0xc001987b00, 0x4c22012, 0x14, 0xc003303dd0, 0x3, 0x3, 0x5396840, 0xc0001e08c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc001987b00, 0x4c22012, 0x14, 0xc002f8f300, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc001987b00, 0x4c22012, 0x14, 0xc002d00c20, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001987b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001987b00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001987b00, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:238 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:24.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:28.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7742" for this suite. 
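------------------------------
The repeated "invalid memory address or nil pointer dereference" panics in the Cluster size autoscaler scalability AfterEach blocks above appear to come from cleanup running even though the BeforeEach skipped on this provider: waitListSchedulableNodes is invoked with a nil clientset (its first argument is 0x0 in every trace), so listing nodes dereferences nil. A minimal defensive cleanup of that shape might look like the sketch below; the function and variable names are illustrative, not the suite's actual code.

package autoscalercleanup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/klog/v2"
)

// restoreClusterSize bails out when the spec was skipped before the client or
// the recorded node count was ever set up, instead of dereferencing a nil
// interface the way the traces above show.
func restoreClusterSize(ctx context.Context, c kubernetes.Interface, originalNodes int) {
	if c == nil || originalNodes == 0 {
		klog.Info("cluster client not initialized (spec skipped); skipping size restore")
		return
	}
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		klog.Errorf("listing nodes during cleanup: %v", err)
		return
	}
	klog.Infof("cluster currently has %d nodes, expected %d", len(nodes.Items), originalNodes)
	// ...the real suite would wait for node count / readiness here...
}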
• ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":4,"skipped":368,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:14.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 STEP: Creating a pod with a privileged container STEP: Executing in the privileged container May 7 20:50:26.855: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1059 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 20:50:26.855: INFO: >>> kubeConfig: /root/.kube/config May 7 20:50:28.131: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-1059 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 20:50:28.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container May 7 20:50:28.737: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1059 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 20:50:28.737: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:29.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-1059" for this suite. 
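------------------------------
The privileged-pod spec above runs two containers in one pod and execs `ip link add dummy1 type dummy` in each, expecting it to succeed only where privileged is true. A sketch of that two-container pod; the image, command and container names are illustrative.

package privilegedexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// privilegedPod pairs a privileged and a non-privileged container so the same
// `ip link add` command can be exec'd in both and the results compared.
func privilegedPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:            "privileged-container",
					Image:           "busybox:1.29",
					Command:         []string{"sleep", "3600"},
					SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
				},
				{
					Name:            "not-privileged-container",
					Image:           "busybox:1.29",
					Command:         []string{"sleep", "3600"},
					SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(false)},
				},
			},
		},
	}
}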
• [SLOW TEST:14.290 seconds] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 ------------------------------ {"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":4,"skipped":766,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:26.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:30.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7973" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":4,"skipped":1141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:26.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 May 7 20:50:26.950: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-c95568f4-ecb0-4edf-8880-34d080b31402" in namespace "security-context-test-4265" to be "Succeeded or Failed" May 7 20:50:26.952: INFO: Pod "alpine-nnp-true-c95568f4-ecb0-4edf-8880-34d080b31402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.635628ms May 7 20:50:28.957: INFO: Pod "alpine-nnp-true-c95568f4-ecb0-4edf-8880-34d080b31402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006844378s May 7 20:50:30.960: INFO: Pod "alpine-nnp-true-c95568f4-ecb0-4edf-8880-34d080b31402": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010100742s May 7 20:50:30.960: INFO: Pod "alpine-nnp-true-c95568f4-ecb0-4edf-8880-34d080b31402" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:30.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4265" for this suite. •SS ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:31.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 May 7 20:50:31.482: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:31.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-2651" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0507 20:50:31.500017 33 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 201 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x42ea920, 0x753e830) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x42ea920, 0x753e830) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001920d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:185 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0027b0750, 0xcb4100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc004b10820, 0xc0027b0750, 0xc004b10820, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0027b0750, 0x479aa8389aeec9, 0xc0027b0778) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x770c940, 0x95, 0x4f9037) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:184 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0045558f0, 0x25, 0x23, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:156 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:46 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0015c5560, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0015c5560, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000e0f3e8, 0x52e17e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0027b16c0, 0xc0009ec5a0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0009ec5a0, 0x0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0009ec5a0, 0x52e17e0, 0xc000190900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000c38b40, 0xc0009ec5a0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000c38b40, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000c38b40, 0xc002108030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7f7974a780e8, 0xc00451dc80, 0x4c22012, 0x14, 0xc0045bd200, 0x3, 0x3, 0x5396840, 0xc000190900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x52e6440, 0xc00451dc80, 0x4c22012, 0x14, 0xc00462cfc0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x52e6440, 0xc00451dc80, 0x4c22012, 0x14, 0xc0045c95e0, 0x2, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00451dc80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc00451dc80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc00451dc80, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.043 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should scale up twice [Feature:ClusterAutoscalerScalability2] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:161 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 7 20:50:31.629: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:28.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 May 7 20:50:28.771: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-16cd40f6-7ad9-4710-9fa6-7193f24aca34" in namespace "security-context-test-8796" to be "Succeeded or Failed" May 7 20:50:28.773: INFO: Pod "busybox-readonly-true-16cd40f6-7ad9-4710-9fa6-7193f24aca34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1033ms May 7 20:50:30.777: INFO: Pod "busybox-readonly-true-16cd40f6-7ad9-4710-9fa6-7193f24aca34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006611421s May 7 20:50:32.781: INFO: Pod "busybox-readonly-true-16cd40f6-7ad9-4710-9fa6-7193f24aca34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009933585s May 7 20:50:34.785: INFO: Pod "busybox-readonly-true-16cd40f6-7ad9-4710-9fa6-7193f24aca34": Phase="Failed", Reason="", readiness=false. 
Elapsed: 6.014387239s May 7 20:50:34.785: INFO: Pod "busybox-readonly-true-16cd40f6-7ad9-4710-9fa6-7193f24aca34" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:34.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8796" for this suite. • [SLOW TEST:6.055 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":694,"failed":0} May 7 20:50:34.796: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:29.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 May 7 20:50:29.160: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-c43b81f9-12a1-4f51-891d-e4baa1ab91f1" in namespace "security-context-test-508" to be "Succeeded or Failed" May 7 20:50:29.163: INFO: Pod "busybox-privileged-true-c43b81f9-12a1-4f51-891d-e4baa1ab91f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354833ms May 7 20:50:31.165: INFO: Pod "busybox-privileged-true-c43b81f9-12a1-4f51-891d-e4baa1ab91f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005039744s May 7 20:50:33.169: INFO: Pod "busybox-privileged-true-c43b81f9-12a1-4f51-891d-e4baa1ab91f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008444099s May 7 20:50:35.172: INFO: Pod "busybox-privileged-true-c43b81f9-12a1-4f51-891d-e4baa1ab91f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011251525s May 7 20:50:37.175: INFO: Pod "busybox-privileged-true-c43b81f9-12a1-4f51-891d-e4baa1ab91f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.014204534s May 7 20:50:37.175: INFO: Pod "busybox-privileged-true-c43b81f9-12a1-4f51-891d-e4baa1ab91f1" satisfied condition "Succeeded or Failed" May 7 20:50:37.183: INFO: Got logs for pod "busybox-privileged-true-c43b81f9-12a1-4f51-891d-e4baa1ab91f1": "" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:37.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-508" for this suite. • [SLOW TEST:8.067 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":5,"skipped":778,"failed":0} May 7 20:50:37.197: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:31.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 May 7 20:50:31.045: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 STEP: creating the pod May 7 20:50:31.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3191 create -f -' May 7 20:50:31.469: INFO: stderr: "" May 7 20:50:31.469: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly May 7 20:50:37.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3191 logs dapi-test-pod test-container' May 7 20:50:37.652: INFO: stderr: "" May 7 20:50:37.652: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-3191\nMY_POD_IP=10.244.4.36\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" May 7 20:50:37.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3191 logs dapi-test-pod test-container' May 7 20:50:37.812: INFO: stderr: "" May 7 20:50:37.812: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-3191\nMY_POD_IP=10.244.4.36\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:37.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-3191" for this suite. 
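------------------------------
The dapi-test-pod output above shows MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP and MY_HOST_IP injected through the Downward API. A sketch of how those environment variables are wired up; the image and command are assumptions, while the fieldRef paths are the standard ones for these values.

package downwardapiexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fieldEnv maps one Downward API field path onto an environment variable.
func fieldEnv(name, path string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
		},
	}
}

// dapiTestPod prints its environment once, so the injected pod metadata shows
// up in the container log much like the kubectl logs output above.
func dapiTestPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dapi-test-pod", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29", // assumed image
				Command: []string{"/bin/sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("MY_POD_NAME", "metadata.name"),
					fieldEnv("MY_POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("MY_POD_IP", "status.podIP"),
					fieldEnv("MY_HOST_IP", "status.hostIP"),
				},
			}},
		},
	}
}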
• [SLOW TEST:6.802 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace","total":-1,"completed":3,"skipped":387,"failed":0} May 7 20:50:37.822: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:26.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 STEP: Creating pod liveness-6fb160d8-66da-429c-9031-ee8f93420b5d in namespace container-probe-3650 May 7 20:50:34.761: INFO: Started pod liveness-6fb160d8-66da-429c-9031-ee8f93420b5d in namespace container-probe-3650 STEP: checking the pod's current state and verifying that restartCount is present May 7 20:50:34.764: INFO: Initial restart count of pod liveness-6fb160d8-66da-429c-9031-ee8f93420b5d is 0 May 7 20:50:56.803: INFO: Restart count of pod container-probe-3650/liveness-6fb160d8-66da-429c-9031-ee8f93420b5d is now 1 (22.039336303s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:50:56.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3650" for this suite. 
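------------------------------
The probing spec above starts a pod with an HTTP liveness probe and then waits for restartCount to move from 0 to 1, which is how it proves the kubelet acted on the failing probe. A sketch of a container with such a probe follows; the image, path, port and thresholds are illustrative and are not the test's redirect-serving fixture. On the v1.19 API used by this run the probe's handler field is still named Handler.

package livenessexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// httpLivenessPod is restarted whenever the kubelet's GET against /healthz
// fails; watching status.containerStatuses[0].restartCount detects that the
// probe actually fired, as in the spec above.
func httpLivenessPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "example.com/healthz-server:latest", // hypothetical image serving /healthz
				LivenessProbe: &corev1.Probe{
					// Handler is the v1.19 field name; newer API versions call it ProbeHandler.
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					TimeoutSeconds:      1,
					FailureThreshold:    1,
				},
			}},
		},
	}
}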
• [SLOW TEST:30.098 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":6,"skipped":722,"failed":0} May 7 20:50:56.826: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:06.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples May 7 20:50:06.795: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 7 20:50:06.796: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 May 7 20:50:06.805: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 May 7 20:50:06.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-4317 create -f -' May 7 20:50:07.206: INFO: stderr: "" May 7 20:50:07.206: INFO: stdout: "pod/liveness-exec created\n" May 7 20:50:07.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-4317 create -f -' May 7 20:50:07.473: INFO: stderr: "" May 7 20:50:07.473: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts May 7 20:50:13.481: INFO: Pod: liveness-http, restart count:0 May 7 20:50:15.483: INFO: Pod: liveness-http, restart count:0 May 7 20:50:17.481: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:17.486: INFO: Pod: liveness-http, restart count:0 May 7 20:50:19.484: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:19.489: INFO: Pod: liveness-http, restart count:0 May 7 20:50:21.487: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:21.492: INFO: Pod: liveness-http, restart count:0 May 7 20:50:23.489: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:23.495: INFO: Pod: liveness-http, restart count:0 May 7 20:50:25.492: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:25.498: INFO: Pod: liveness-http, restart count:0 May 7 20:50:27.494: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:27.501: INFO: Pod: liveness-http, restart count:0 May 7 20:50:29.497: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:29.504: INFO: Pod: liveness-http, restart count:0 May 7 20:50:31.500: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:31.506: INFO: Pod: liveness-http, restart count:0 May 7 20:50:33.504: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:33.508: INFO: Pod: liveness-http, restart count:0 May 7 20:50:35.507: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:35.511: INFO: Pod: liveness-http, restart count:0 May 7 20:50:37.509: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:37.514: INFO: Pod: liveness-http, restart count:0 May 7 20:50:39.513: INFO: Pod: 
liveness-exec, restart count:0 May 7 20:50:39.517: INFO: Pod: liveness-http, restart count:0 May 7 20:50:41.516: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:41.520: INFO: Pod: liveness-http, restart count:0 May 7 20:50:43.520: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:43.522: INFO: Pod: liveness-http, restart count:0 May 7 20:50:45.523: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:45.525: INFO: Pod: liveness-http, restart count:0 May 7 20:50:47.527: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:47.527: INFO: Pod: liveness-http, restart count:0 May 7 20:50:49.530: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:49.530: INFO: Pod: liveness-http, restart count:0 May 7 20:50:51.535: INFO: Pod: liveness-http, restart count:0 May 7 20:50:51.535: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:53.538: INFO: Pod: liveness-http, restart count:0 May 7 20:50:53.539: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:55.541: INFO: Pod: liveness-http, restart count:1 May 7 20:50:55.541: INFO: Saw liveness-http restart, succeeded... May 7 20:50:55.541: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:57.546: INFO: Pod: liveness-exec, restart count:0 May 7 20:50:59.549: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:01.552: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:03.555: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:05.559: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:07.563: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:09.567: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:11.569: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:13.572: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:15.575: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:17.579: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:19.581: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:21.585: INFO: Pod: liveness-exec, restart count:0 May 7 20:51:23.589: INFO: Pod: liveness-exec, restart count:1 May 7 20:51:23.589: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:51:23.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-4317" for this suite. 
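------------------------------
The two pods above follow the stock liveness examples: liveness-exec removes the file its exec probe cats after roughly 30 seconds, and liveness-http serves a /healthz endpoint that starts failing, so each is restarted exactly once, as logged. A hedged Go sketch approximating the exec variant (image, timings and the literal shell script are illustrative, not the exact manifest fed to kubectl):

// Sketch only: a pod whose exec liveness probe starts failing once the file it
// checks is removed, prompting the kubelet to restart the container.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	livenessExec := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox:1.29",
				Command: []string{
					"/bin/sh", "-c",
					// Healthy for ~30s, then the probed file disappears.
					"touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600",
				},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api releases
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	fmt.Printf("%s restarts once its exec probe starts failing\n", livenessExec.Name)
}
------------------------------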
• [SLOW TEST:76.822 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 [k8s.io] Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted","total":-1,"completed":1,"skipped":309,"failed":0} May 7 20:51:23.600: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:06.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe May 7 20:50:06.560: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 7 20:50:06.562: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 STEP: Creating pod liveness-a3ac1af7-b6b3-4bdf-b65a-a0cd2f0a2473 in namespace container-probe-6679 May 7 20:50:14.582: INFO: Started pod liveness-a3ac1af7-b6b3-4bdf-b65a-a0cd2f0a2473 in namespace container-probe-6679 STEP: checking the pod's current state and verifying that restartCount is present May 7 20:50:14.584: INFO: Initial restart count of pod liveness-a3ac1af7-b6b3-4bdf-b65a-a0cd2f0a2473 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:54:15.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6679" for this suite. 
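------------------------------
By contrast with the local-redirect case earlier, this probe's target redirects to a different host. As of this release the kubelet does not follow non-local redirects and records the probe as a success (surfacing a ProbeWarning event instead), which is why restartCount never leaves 0 during the roughly four-minute observation window above. A sketch of that probe shape (path, port and timings illustrative):

// Sketch only: same probe structure as the local-redirect example, but the
// redirect Location points at another host, so the kubelet stops following it
// and the container is never restarted.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api releases
			HTTPGet: &corev1.HTTPGetAction{
				// Server replies 302 Location: http://0.0.0.0/ -- a different host,
				// which the prober treats as a success rather than following.
				Path: "/redirect?loc=http%3A%2F%2F0.0.0.0%2F",
				Port: intstr.FromInt(8080),
			},
		},
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
	fmt.Printf("non-local redirect liveness probe: %+v\n", probe)
}
------------------------------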
• [SLOW TEST:248.583 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":1,"skipped":205,"failed":0} May 7 20:54:15.128: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:06.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 STEP: wait until node is ready May 7 20:50:06.392: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration May 7 20:50:07.402: INFO: node status heartbeat is unchanged for 1.003034236s, waiting for 1m20s May 7 20:50:08.402: INFO: node status heartbeat is unchanged for 2.003737574s, waiting for 1m20s May 7 20:50:09.402: INFO: node status heartbeat is unchanged for 3.003557042s, waiting for 1m20s May 7 20:50:10.403: INFO: node status heartbeat is unchanged for 4.004066954s, waiting for 1m20s May 7 20:50:11.403: INFO: node status heartbeat is unchanged for 5.00453412s, waiting for 1m20s May 7 20:50:12.402: INFO: node status heartbeat is unchanged for 6.003219942s, waiting for 1m20s May 7 20:50:13.402: INFO: node status heartbeat is unchanged for 7.003329103s, waiting for 1m20s May 7 20:50:14.406: INFO: node status heartbeat is unchanged for 8.007716591s, waiting for 1m20s May 7 20:50:15.403: INFO: node status heartbeat is unchanged for 9.004192864s, waiting for 1m20s May 7 20:50:16.403: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:50:16.406: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: 
"405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, NodeInfo: v1.NodeSystemInfo{MachineID: "6f56c5a750d0441dba0ffa6273fb1a17", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "98b263e9-3136-45ed-9b07-5f5b6b9d69b8", KernelVersion: "3.10.0-1160.25.1.el7.x86_64", OSImage: "CentOS Linux 7 (Core)", ContainerRuntimeVersion: "docker://19.3.14", KubeletVersion: "v1.19.8", KubeProxyVersion: "v1.19.8", OperatingSystem: "linux", Architecture: "amd64"}, Images: []v1.ContainerImage{ ... 
// 20 identical elements {Names: []string{"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", "nginx:1.14-alpine"}, SizeBytes: 16032814}, {Names: []string{"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e", "gcr.io/google-samples/hello-go-gke:1.0"}, SizeBytes: 11443478}, + { + Names: []string{ + "busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", + "busybox:1.29", + }, + SizeBytes: 1154361, + }, {Names: []string{"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", "busybox:1.28"}, SizeBytes: 1146369}, {Names: []string{"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa", "k8s.gcr.io/pause:3.3"}, SizeBytes: 682696}, }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } May 7 20:50:17.402: INFO: node status heartbeat is unchanged for 999.72246ms, waiting for 1m20s May 7 20:50:18.402: INFO: node status heartbeat is unchanged for 1.99925214s, waiting for 1m20s May 7 20:50:19.403: INFO: node status heartbeat is unchanged for 3.000364026s, waiting for 1m20s May 7 20:50:20.404: INFO: node status heartbeat is unchanged for 4.00103945s, waiting for 1m20s May 7 20:50:21.403: INFO: node status heartbeat is unchanged for 5.000026475s, waiting for 1m20s May 7 20:50:22.403: INFO: node status heartbeat is unchanged for 5.999896375s, waiting for 1m20s May 7 20:50:23.402: INFO: node status heartbeat is unchanged for 6.999305715s, waiting for 1m20s May 7 20:50:24.402: INFO: node status heartbeat is unchanged for 7.999239232s, waiting for 1m20s May 7 20:50:25.403: INFO: node status heartbeat is unchanged for 9.000278587s, waiting for 1m20s May 7 20:50:26.401: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:50:26.404: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, 
Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:50:27.404: INFO: node status heartbeat is unchanged for 1.002542729s, waiting for 1m20s May 7 20:50:28.403: INFO: node status heartbeat is unchanged for 2.001615212s, waiting for 1m20s May 7 20:50:29.403: INFO: node status heartbeat is unchanged for 3.00163487s, waiting for 1m20s May 7 20:50:30.404: INFO: node status heartbeat is unchanged for 4.002399795s, waiting for 1m20s May 7 20:50:31.402: INFO: node status heartbeat is unchanged for 5.000552151s, waiting for 1m20s May 7 20:50:32.402: INFO: node status heartbeat is unchanged for 6.000483886s, waiting for 1m20s May 7 20:50:33.402: INFO: node status heartbeat is unchanged for 7.00090172s, waiting for 1m20s May 7 20:50:34.402: INFO: node status heartbeat is unchanged for 8.001037981s, waiting for 1m20s May 7 20:50:35.403: INFO: node status heartbeat is unchanged for 9.00135718s, waiting for 1m20s May 7 20:50:36.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:50:36.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": 
{s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 7 20:50:37.402: INFO: node status heartbeat is unchanged for 1.00044088s, waiting for 1m20s May 7 20:50:38.402: INFO: node status heartbeat is unchanged for 2.000031719s, waiting for 1m20s May 7 20:50:39.404: INFO: node status heartbeat is unchanged for 3.001980869s, waiting for 1m20s May 7 20:50:40.402: INFO: node status heartbeat is unchanged for 4.000171234s, waiting for 1m20s May 7 20:50:41.403: INFO: node status heartbeat is unchanged for 5.000839374s, waiting for 1m20s May 7 20:50:42.403: INFO: node status heartbeat is unchanged for 6.001518609s, waiting for 1m20s May 7 20:50:43.402: INFO: node status heartbeat is unchanged for 7.000353639s, waiting for 1m20s May 7 20:50:44.402: INFO: node status heartbeat is unchanged for 8.000440085s, waiting for 1m20s May 7 20:50:45.403: INFO: node status heartbeat is unchanged for 9.001592437s, waiting for 1m20s May 7 20:50:46.403: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:50:46.406: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:36 +0000 
UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, NodeInfo: v1.NodeSystemInfo{MachineID: "6f56c5a750d0441dba0ffa6273fb1a17", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "98b263e9-3136-45ed-9b07-5f5b6b9d69b8", KernelVersion: "3.10.0-1160.25.1.el7.x86_64", OSImage: "CentOS Linux 7 (Core)", ContainerRuntimeVersion: "docker://19.3.14", KubeletVersion: "v1.19.8", KubeProxyVersion: "v1.19.8", OperatingSystem: "linux", Architecture: "amd64"}, Images: []v1.ContainerImage{ ... // 14 identical elements {Names: []string{"k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b", "k8s.gcr.io/kube-scheduler:v1.19.8"}, SizeBytes: 46510430}, {Names: []string{"localhost:30500/sriov-device-plugin@sha256:bae53f2ec899d23f9342d730c376a1ee3805e96fd1e5e4857e65085e6529557d", "localhost:30500/sriov-device-plugin:v3.3.1"}, SizeBytes: 44392820}, + { + Names: []string{ + "gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213", + "gcr.io/kubernetes-e2e-test-images/nonroot:1.0", + }, + SizeBytes: 42321438, + }, {Names: []string{"quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee", "quay.io/prometheus/node-exporter:v0.18.1"}, SizeBytes: 22933477}, {Names: []string{"localhost:30500/tas-controller@sha256:09461cf1b75776eb7d277a89d3a624c9eea355bf2ab1d8abbe45c40df99de268", "localhost:30500/tas-controller:0.1"}, SizeBytes: 22922439}, ... 
// 2 identical elements {Names: []string{"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", "nginx:1.14-alpine"}, SizeBytes: 16032814}, {Names: []string{"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e", "gcr.io/google-samples/hello-go-gke:1.0"}, SizeBytes: 11443478}, + { + Names: []string{ + "gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411", + "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0", + }, + SizeBytes: 6757579, + }, {Names: []string{"busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", "busybox:1.29"}, SizeBytes: 1154361}, {Names: []string{"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", "busybox:1.28"}, SizeBytes: 1146369}, {Names: []string{"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa", "k8s.gcr.io/pause:3.3"}, SizeBytes: 682696}, }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } May 7 20:50:47.402: INFO: node status heartbeat is unchanged for 998.861204ms, waiting for 1m20s May 7 20:50:48.403: INFO: node status heartbeat is unchanged for 1.999649296s, waiting for 1m20s May 7 20:50:49.403: INFO: node status heartbeat is unchanged for 2.999704929s, waiting for 1m20s May 7 20:50:50.405: INFO: node status heartbeat is unchanged for 4.001463769s, waiting for 1m20s May 7 20:50:51.403: INFO: node status heartbeat is unchanged for 4.999695145s, waiting for 1m20s May 7 20:50:52.402: INFO: node status heartbeat is unchanged for 5.998628333s, waiting for 1m20s May 7 20:50:53.403: INFO: node status heartbeat is unchanged for 6.999598842s, waiting for 1m20s May 7 20:50:54.404: INFO: node status heartbeat is unchanged for 8.000581776s, waiting for 1m20s May 7 20:50:55.402: INFO: node status heartbeat is unchanged for 8.99920422s, waiting for 1m20s May 7 20:50:56.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:50:56.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", 
Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:50:57.403: INFO: node status heartbeat is unchanged for 1.001124071s, waiting for 1m20s May 7 20:50:58.404: INFO: node status heartbeat is unchanged for 2.002368691s, waiting for 1m20s May 7 20:50:59.403: INFO: node status heartbeat is unchanged for 3.000991068s, waiting for 1m20s May 7 20:51:00.404: INFO: node status heartbeat is unchanged for 4.001913586s, waiting for 1m20s May 7 20:51:01.403: INFO: node status heartbeat is unchanged for 5.000745129s, waiting for 1m20s May 7 20:51:02.403: INFO: node status heartbeat is unchanged for 6.000648922s, waiting for 1m20s May 7 20:51:03.403: INFO: node status heartbeat is unchanged for 7.001012875s, waiting for 1m20s May 7 20:51:04.403: INFO: node status heartbeat is unchanged for 8.000933447s, waiting for 1m20s May 7 20:51:05.403: INFO: node status heartbeat is unchanged for 9.000542029s, waiting for 1m20s May 7 20:51:06.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:51:06.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: 
"DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:50:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 7 20:51:07.402: INFO: node status heartbeat is unchanged for 1.000295808s, waiting for 1m20s May 7 20:51:08.403: INFO: node status heartbeat is unchanged for 2.000849593s, waiting for 1m20s May 7 20:51:09.403: INFO: node status heartbeat is unchanged for 3.000630946s, waiting for 1m20s May 7 20:51:10.405: INFO: node status heartbeat is unchanged for 4.003012689s, waiting for 1m20s May 7 20:51:11.402: INFO: node status heartbeat is unchanged for 5.00031844s, waiting for 1m20s May 7 20:51:12.404: INFO: node status heartbeat is unchanged for 6.001718635s, waiting for 1m20s May 7 20:51:13.402: INFO: node status heartbeat is unchanged for 6.999543962s, waiting for 1m20s May 7 20:51:14.404: INFO: node status heartbeat is unchanged for 8.002206229s, waiting for 1m20s May 7 20:51:15.404: INFO: node status heartbeat is unchanged for 9.002194419s, waiting for 1m20s May 7 20:51:16.403: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:51:16.406: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:06 +0000 
UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:51:17.402: INFO: node status heartbeat is unchanged for 999.103395ms, waiting for 1m20s May 7 20:51:18.405: INFO: node status heartbeat is unchanged for 2.001854185s, waiting for 1m20s May 7 20:51:19.403: INFO: node status heartbeat is unchanged for 2.999772003s, waiting for 1m20s May 7 20:51:20.404: INFO: node status heartbeat is unchanged for 4.000504196s, waiting for 1m20s May 7 20:51:21.402: INFO: node status heartbeat is unchanged for 4.999355145s, waiting for 1m20s May 7 20:51:22.402: INFO: node status heartbeat is unchanged for 5.99890651s, waiting for 1m20s May 7 20:51:23.403: INFO: node status heartbeat is unchanged for 6.999713287s, waiting for 1m20s May 7 20:51:24.404: INFO: node status heartbeat is unchanged for 8.000436046s, waiting for 1m20s May 7 20:51:25.403: INFO: node status heartbeat is unchanged for 9.000397895s, waiting for 1m20s May 7 20:51:26.403: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:51:26.406: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 
20:51:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:51:27.402: INFO: node status heartbeat is unchanged for 998.873074ms, waiting for 1m20s May 7 20:51:28.403: INFO: node status heartbeat is unchanged for 2.000302972s, waiting for 1m20s May 7 20:51:29.402: INFO: node status heartbeat is unchanged for 2.999388434s, waiting for 1m20s May 7 20:51:30.403: INFO: node status heartbeat is unchanged for 3.999950761s, waiting for 1m20s May 7 20:51:31.404: INFO: node status heartbeat is unchanged for 5.000693173s, waiting for 1m20s May 7 20:51:32.403: INFO: node status heartbeat is unchanged for 6.000252504s, waiting for 1m20s May 7 20:51:33.403: INFO: node status heartbeat is unchanged for 6.99977987s, waiting for 1m20s May 7 20:51:34.402: INFO: node status heartbeat is unchanged for 7.999110351s, waiting for 1m20s May 7 20:51:35.402: INFO: node status heartbeat is unchanged for 8.99929189s, waiting for 1m20s May 7 20:51:36.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:51:36.404: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, 
s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 7 20:51:37.403: INFO: node status heartbeat is unchanged for 1.001379128s, waiting for 1m20s May 7 20:51:38.405: INFO: node status heartbeat is unchanged for 2.003102681s, waiting for 1m20s May 7 20:51:39.404: INFO: node status heartbeat is unchanged for 3.002879042s, waiting for 1m20s May 7 20:51:40.403: INFO: node status heartbeat is unchanged for 4.001454259s, waiting for 1m20s May 7 20:51:41.402: INFO: node status heartbeat is unchanged for 5.000661962s, waiting for 1m20s May 7 20:51:42.402: INFO: node status heartbeat is unchanged for 6.00077417s, waiting for 1m20s May 7 20:51:43.402: INFO: node status heartbeat is unchanged for 7.000755138s, waiting for 1m20s May 7 20:51:44.403: INFO: node status heartbeat is unchanged for 8.001254047s, waiting for 1m20s May 7 20:51:45.402: INFO: node status heartbeat is unchanged for 9.000657868s, waiting for 1m20s May 7 20:51:46.402: INFO: node status heartbeat is unchanged for 10.000753581s, waiting for 1m20s May 7 20:51:47.403: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:51:47.406: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { 
Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:51:48.403: INFO: node status heartbeat is unchanged for 1.000219708s, waiting for 1m20s May 7 20:51:49.403: INFO: node status heartbeat is unchanged for 1.999713463s, waiting for 1m20s May 7 20:51:50.402: INFO: node status heartbeat is unchanged for 2.999038553s, waiting for 1m20s May 7 20:51:51.402: INFO: node status heartbeat is unchanged for 3.999097454s, waiting for 1m20s May 7 20:51:52.403: INFO: node status heartbeat is unchanged for 4.999556835s, waiting for 1m20s May 7 20:51:53.403: INFO: node status heartbeat is unchanged for 6.000392385s, waiting for 1m20s May 7 20:51:54.403: INFO: node status heartbeat is unchanged for 7.000164035s, waiting for 1m20s May 7 20:51:55.402: INFO: node status heartbeat is unchanged for 7.998631693s, waiting for 1m20s May 7 20:51:56.402: INFO: node status heartbeat is unchanged for 8.998475187s, waiting for 1m20s May 7 20:51:57.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:51:57.404: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:51:58.402: INFO: node status heartbeat is unchanged for 1.000049913s, waiting for 1m20s May 7 20:51:59.402: INFO: node status heartbeat is unchanged for 2.000131912s, waiting for 1m20s May 7 20:52:00.403: INFO: node status heartbeat is unchanged for 3.000874136s, waiting for 1m20s May 7 20:52:01.403: INFO: node status heartbeat is unchanged for 4.000843881s, waiting for 1m20s May 7 20:52:02.403: INFO: node status heartbeat is unchanged for 5.001080453s, waiting for 1m20s May 7 20:52:03.402: INFO: node status heartbeat is unchanged for 6.000133793s, waiting for 1m20s May 7 20:52:04.405: INFO: node status heartbeat is unchanged for 7.003507199s, waiting for 1m20s May 7 20:52:05.402: INFO: node status heartbeat is unchanged for 8.000038272s, waiting for 1m20s May 7 20:52:06.403: INFO: node status heartbeat is unchanged for 9.001254983s, waiting for 1m20s May 7 20:52:07.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:52:07.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", 
Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:51:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 7 20:52:08.405: INFO: node status heartbeat is unchanged for 1.002259904s, waiting for 1m20s May 7 20:52:09.403: INFO: node status heartbeat is unchanged for 2.000600378s, waiting for 1m20s May 7 20:52:10.403: INFO: node status heartbeat is unchanged for 3.000261403s, waiting for 1m20s May 7 20:52:11.402: INFO: node status heartbeat is unchanged for 3.999893131s, waiting for 1m20s May 7 20:52:12.403: INFO: node status heartbeat is unchanged for 5.000694104s, waiting for 1m20s May 7 20:52:13.402: INFO: node status heartbeat is unchanged for 5.999683586s, waiting for 1m20s May 7 20:52:14.402: INFO: node status heartbeat is unchanged for 6.999359468s, waiting for 1m20s May 7 20:52:15.403: INFO: node status heartbeat is unchanged for 8.001124224s, waiting for 1m20s May 7 20:52:16.401: INFO: node status heartbeat is unchanged for 8.999143993s, waiting for 1m20s May 7 20:52:17.403: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:52:17.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:06 +0000 
UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:52:18.403: INFO: node status heartbeat is unchanged for 1.000421343s, waiting for 1m20s May 7 20:52:19.402: INFO: node status heartbeat is unchanged for 1.998950055s, waiting for 1m20s May 7 20:52:20.402: INFO: node status heartbeat is unchanged for 2.999650061s, waiting for 1m20s May 7 20:52:21.401: INFO: node status heartbeat is unchanged for 3.998640532s, waiting for 1m20s May 7 20:52:22.403: INFO: node status heartbeat is unchanged for 4.999751676s, waiting for 1m20s May 7 20:52:23.403: INFO: node status heartbeat is unchanged for 6.000557636s, waiting for 1m20s May 7 20:52:24.402: INFO: node status heartbeat is unchanged for 6.999066928s, waiting for 1m20s May 7 20:52:25.404: INFO: node status heartbeat is unchanged for 8.000711759s, waiting for 1m20s May 7 20:52:26.403: INFO: node status heartbeat is unchanged for 9.000202523s, waiting for 1m20s May 7 20:52:27.405: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:52:27.408: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 
20:52:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:52:28.405: INFO: node status heartbeat is unchanged for 1.000378702s, waiting for 1m20s May 7 20:52:29.404: INFO: node status heartbeat is unchanged for 1.998809439s, waiting for 1m20s May 7 20:52:30.403: INFO: node status heartbeat is unchanged for 2.997818667s, waiting for 1m20s May 7 20:52:31.402: INFO: node status heartbeat is unchanged for 3.997755957s, waiting for 1m20s May 7 20:52:32.402: INFO: node status heartbeat is unchanged for 4.99757115s, waiting for 1m20s May 7 20:52:33.402: INFO: node status heartbeat is unchanged for 5.997341507s, waiting for 1m20s May 7 20:52:34.403: INFO: node status heartbeat is unchanged for 6.998394991s, waiting for 1m20s May 7 20:52:35.402: INFO: node status heartbeat is unchanged for 7.997673142s, waiting for 1m20s May 7 20:52:36.402: INFO: node status heartbeat is unchanged for 8.996833384s, waiting for 1m20s May 7 20:52:37.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:52:37.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, 
s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 7 20:52:38.404: INFO: node status heartbeat is unchanged for 1.001664968s, waiting for 1m20s May 7 20:52:39.404: INFO: node status heartbeat is unchanged for 2.001440234s, waiting for 1m20s May 7 20:52:40.404: INFO: node status heartbeat is unchanged for 3.001483201s, waiting for 1m20s May 7 20:52:41.403: INFO: node status heartbeat is unchanged for 4.00046183s, waiting for 1m20s May 7 20:52:42.402: INFO: node status heartbeat is unchanged for 4.999477642s, waiting for 1m20s May 7 20:52:43.402: INFO: node status heartbeat is unchanged for 5.999343829s, waiting for 1m20s May 7 20:52:44.404: INFO: node status heartbeat is unchanged for 7.001444018s, waiting for 1m20s May 7 20:52:45.403: INFO: node status heartbeat is unchanged for 8.000494124s, waiting for 1m20s May 7 20:52:46.404: INFO: node status heartbeat is unchanged for 9.001658535s, waiting for 1m20s May 7 20:52:47.404: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:52:47.406: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:36 +0000 
UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:52:48.404: INFO: node status heartbeat is unchanged for 1.000747861s, waiting for 1m20s May 7 20:52:49.402: INFO: node status heartbeat is unchanged for 1.998541471s, waiting for 1m20s May 7 20:52:50.405: INFO: node status heartbeat is unchanged for 3.001700348s, waiting for 1m20s May 7 20:52:51.403: INFO: node status heartbeat is unchanged for 3.999779316s, waiting for 1m20s May 7 20:52:52.404: INFO: node status heartbeat is unchanged for 5.000681998s, waiting for 1m20s May 7 20:52:53.403: INFO: node status heartbeat is unchanged for 5.999649852s, waiting for 1m20s May 7 20:52:54.405: INFO: node status heartbeat is unchanged for 7.000984677s, waiting for 1m20s May 7 20:52:55.403: INFO: node status heartbeat is unchanged for 7.998984659s, waiting for 1m20s May 7 20:52:56.402: INFO: node status heartbeat is unchanged for 8.998404848s, waiting for 1m20s May 7 20:52:57.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:52:57.404: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 
20:52:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:56 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:52:58.403: INFO: node status heartbeat is unchanged for 1.001161277s, waiting for 1m20s May 7 20:52:59.402: INFO: node status heartbeat is unchanged for 2.000625138s, waiting for 1m20s May 7 20:53:00.404: INFO: node status heartbeat is unchanged for 3.002507344s, waiting for 1m20s May 7 20:53:01.402: INFO: node status heartbeat is unchanged for 4.0005627s, waiting for 1m20s May 7 20:53:02.402: INFO: node status heartbeat is unchanged for 5.000761517s, waiting for 1m20s May 7 20:53:03.403: INFO: node status heartbeat is unchanged for 6.001420766s, waiting for 1m20s May 7 20:53:04.402: INFO: node status heartbeat is unchanged for 7.000792224s, waiting for 1m20s May 7 20:53:05.402: INFO: node status heartbeat is unchanged for 7.999950491s, waiting for 1m20s May 7 20:53:06.402: INFO: node status heartbeat is unchanged for 9.000590909s, waiting for 1m20s May 7 20:53:07.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:53:07.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, 
s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:52:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:06 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 7 20:53:08.402: INFO: node status heartbeat is unchanged for 999.691597ms, waiting for 1m20s May 7 20:53:09.402: INFO: node status heartbeat is unchanged for 1.999861446s, waiting for 1m20s May 7 20:53:10.402: INFO: node status heartbeat is unchanged for 2.999547856s, waiting for 1m20s May 7 20:53:11.403: INFO: node status heartbeat is unchanged for 4.000912329s, waiting for 1m20s May 7 20:53:12.402: INFO: node status heartbeat is unchanged for 4.99955412s, waiting for 1m20s May 7 20:53:13.402: INFO: node status heartbeat is unchanged for 5.999459169s, waiting for 1m20s May 7 20:53:14.403: INFO: node status heartbeat is unchanged for 7.000641157s, waiting for 1m20s May 7 20:53:15.402: INFO: node status heartbeat is unchanged for 8.000080598s, waiting for 1m20s May 7 20:53:16.404: INFO: node status heartbeat is unchanged for 9.001760188s, waiting for 1m20s May 7 20:53:17.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:53:17.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:06 +0000 
UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:16 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:53:18.402: INFO: node status heartbeat is unchanged for 1.000457412s, waiting for 1m20s May 7 20:53:19.402: INFO: node status heartbeat is unchanged for 2.000244965s, waiting for 1m20s May 7 20:53:20.402: INFO: node status heartbeat is unchanged for 2.999987111s, waiting for 1m20s May 7 20:53:21.402: INFO: node status heartbeat is unchanged for 4.000445997s, waiting for 1m20s May 7 20:53:22.403: INFO: node status heartbeat is unchanged for 5.001112237s, waiting for 1m20s May 7 20:53:23.402: INFO: node status heartbeat is unchanged for 6.000100757s, waiting for 1m20s May 7 20:53:24.403: INFO: node status heartbeat is unchanged for 7.00150636s, waiting for 1m20s May 7 20:53:25.402: INFO: node status heartbeat is unchanged for 8.000591896s, waiting for 1m20s May 7 20:53:26.403: INFO: node status heartbeat is unchanged for 9.000668345s, waiting for 1m20s May 7 20:53:27.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:53:27.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 
20:53:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:53:28.402: INFO: node status heartbeat is unchanged for 999.63037ms, waiting for 1m20s May 7 20:53:29.402: INFO: node status heartbeat is unchanged for 1.99933426s, waiting for 1m20s May 7 20:53:30.402: INFO: node status heartbeat is unchanged for 2.999499631s, waiting for 1m20s May 7 20:53:31.403: INFO: node status heartbeat is unchanged for 4.000676063s, waiting for 1m20s May 7 20:53:32.404: INFO: node status heartbeat is unchanged for 5.00156266s, waiting for 1m20s May 7 20:53:33.404: INFO: node status heartbeat is unchanged for 6.00135838s, waiting for 1m20s May 7 20:53:34.403: INFO: node status heartbeat is unchanged for 7.000840601s, waiting for 1m20s May 7 20:53:35.404: INFO: node status heartbeat is unchanged for 8.002010315s, waiting for 1m20s May 7 20:53:36.401: INFO: node status heartbeat is unchanged for 8.999054036s, waiting for 1m20s May 7 20:53:37.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:53:37.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, 
s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:36 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 7 20:53:38.402: INFO: node status heartbeat is unchanged for 999.997888ms, waiting for 1m20s May 7 20:53:39.402: INFO: node status heartbeat is unchanged for 2.000129404s, waiting for 1m20s May 7 20:53:40.404: INFO: node status heartbeat is unchanged for 3.002200245s, waiting for 1m20s May 7 20:53:41.402: INFO: node status heartbeat is unchanged for 4.000230301s, waiting for 1m20s May 7 20:53:42.404: INFO: node status heartbeat is unchanged for 5.001903826s, waiting for 1m20s May 7 20:53:43.403: INFO: node status heartbeat is unchanged for 6.000894929s, waiting for 1m20s May 7 20:53:44.404: INFO: node status heartbeat is unchanged for 7.001968846s, waiting for 1m20s May 7 20:53:45.403: INFO: node status heartbeat is unchanged for 8.00086305s, waiting for 1m20s May 7 20:53:46.404: INFO: node status heartbeat is unchanged for 9.002223892s, waiting for 1m20s May 7 20:53:47.404: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:53:47.407: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:36 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:36 +0000 
UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:46 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:53:48.403: INFO: node status heartbeat is unchanged for 998.800859ms, waiting for 1m20s May 7 20:53:49.403: INFO: node status heartbeat is unchanged for 1.998416684s, waiting for 1m20s May 7 20:53:50.402: INFO: node status heartbeat is unchanged for 2.997604877s, waiting for 1m20s May 7 20:53:51.402: INFO: node status heartbeat is unchanged for 3.998148491s, waiting for 1m20s May 7 20:53:52.402: INFO: node status heartbeat is unchanged for 4.997935721s, waiting for 1m20s May 7 20:53:53.402: INFO: node status heartbeat is unchanged for 5.997873955s, waiting for 1m20s May 7 20:53:54.404: INFO: node status heartbeat is unchanged for 6.99937802s, waiting for 1m20s May 7 20:53:55.404: INFO: node status heartbeat is unchanged for 7.999677828s, waiting for 1m20s May 7 20:53:56.402: INFO: node status heartbeat is unchanged for 8.997518785s, waiting for 1m20s May 7 20:53:57.402: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s May 7 20:53:57.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 
20:53:57 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:57 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:57 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:53:58.404: INFO: node status heartbeat is unchanged for 1.002468033s, waiting for 1m20s May 7 20:53:59.404: INFO: node status heartbeat is unchanged for 2.001901804s, waiting for 1m20s May 7 20:54:00.403: INFO: node status heartbeat is unchanged for 3.001327601s, waiting for 1m20s May 7 20:54:01.402: INFO: node status heartbeat is unchanged for 4.000523362s, waiting for 1m20s May 7 20:54:02.405: INFO: node status heartbeat is unchanged for 5.002979163s, waiting for 1m20s May 7 20:54:03.404: INFO: node status heartbeat is unchanged for 6.002363279s, waiting for 1m20s May 7 20:54:04.404: INFO: node status heartbeat is unchanged for 7.001940422s, waiting for 1m20s May 7 20:54:05.402: INFO: node status heartbeat is unchanged for 8.000602958s, waiting for 1m20s May 7 20:54:06.403: INFO: node status heartbeat is unchanged for 9.001540261s, waiting for 1m20s May 7 20:54:07.410: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:54:07.412: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: 
"DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:07 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:07 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:53:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:07 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 7 20:54:08.402: INFO: node status heartbeat is unchanged for 992.302504ms, waiting for 1m20s May 7 20:54:09.402: INFO: node status heartbeat is unchanged for 1.99202057s, waiting for 1m20s May 7 20:54:10.403: INFO: node status heartbeat is unchanged for 2.993734651s, waiting for 1m20s May 7 20:54:11.402: INFO: node status heartbeat is unchanged for 3.99275172s, waiting for 1m20s May 7 20:54:12.402: INFO: node status heartbeat is unchanged for 4.992539179s, waiting for 1m20s May 7 20:54:13.404: INFO: node status heartbeat is unchanged for 5.994065574s, waiting for 1m20s May 7 20:54:14.403: INFO: node status heartbeat is unchanged for 6.993797218s, waiting for 1m20s May 7 20:54:15.403: INFO: node status heartbeat is unchanged for 7.992941636s, waiting for 1m20s May 7 20:54:16.403: INFO: node status heartbeat is unchanged for 8.993504807s, waiting for 1m20s May 7 20:54:17.403: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:54:17.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:17 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:17 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:07 +0000 
UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:17 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:54:18.408: INFO: node status heartbeat is unchanged for 1.005156779s, waiting for 1m20s May 7 20:54:19.403: INFO: node status heartbeat is unchanged for 2.000446256s, waiting for 1m20s May 7 20:54:20.403: INFO: node status heartbeat is unchanged for 3.000184512s, waiting for 1m20s May 7 20:54:21.402: INFO: node status heartbeat is unchanged for 3.999154623s, waiting for 1m20s May 7 20:54:22.403: INFO: node status heartbeat is unchanged for 5.00021069s, waiting for 1m20s May 7 20:54:23.403: INFO: node status heartbeat is unchanged for 6.000452388s, waiting for 1m20s May 7 20:54:24.402: INFO: node status heartbeat is unchanged for 6.999525588s, waiting for 1m20s May 7 20:54:25.402: INFO: node status heartbeat is unchanged for 7.999880047s, waiting for 1m20s May 7 20:54:26.403: INFO: node status heartbeat is unchanged for 9.000133817s, waiting for 1m20s May 7 20:54:27.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:54:27.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 
20:54:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:54:28.404: INFO: node status heartbeat is unchanged for 1.001946174s, waiting for 1m20s May 7 20:54:29.403: INFO: node status heartbeat is unchanged for 2.001270728s, waiting for 1m20s May 7 20:54:30.404: INFO: node status heartbeat is unchanged for 3.002070443s, waiting for 1m20s May 7 20:54:31.404: INFO: node status heartbeat is unchanged for 4.001785853s, waiting for 1m20s May 7 20:54:32.403: INFO: node status heartbeat is unchanged for 5.000716893s, waiting for 1m20s May 7 20:54:33.403: INFO: node status heartbeat is unchanged for 6.000918329s, waiting for 1m20s May 7 20:54:34.403: INFO: node status heartbeat is unchanged for 7.001174771s, waiting for 1m20s May 7 20:54:35.402: INFO: node status heartbeat is unchanged for 8.000526613s, waiting for 1m20s May 7 20:54:36.403: INFO: node status heartbeat is unchanged for 9.000823655s, waiting for 1m20s May 7 20:54:37.402: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:54:37.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: 
"DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:37 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:37 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:37 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... 
// 5 identical fields } May 7 20:54:38.402: INFO: node status heartbeat is unchanged for 1.00022619s, waiting for 1m20s May 7 20:54:39.403: INFO: node status heartbeat is unchanged for 2.000595783s, waiting for 1m20s May 7 20:54:40.403: INFO: node status heartbeat is unchanged for 3.000347254s, waiting for 1m20s May 7 20:54:41.402: INFO: node status heartbeat is unchanged for 4.000092335s, waiting for 1m20s May 7 20:54:42.404: INFO: node status heartbeat is unchanged for 5.002158466s, waiting for 1m20s May 7 20:54:43.402: INFO: node status heartbeat is unchanged for 5.999530495s, waiting for 1m20s May 7 20:54:44.403: INFO: node status heartbeat is unchanged for 7.001012298s, waiting for 1m20s May 7 20:54:45.403: INFO: node status heartbeat is unchanged for 8.000543928s, waiting for 1m20s May 7 20:54:46.402: INFO: node status heartbeat is unchanged for 9.000069676s, waiting for 1m20s May 7 20:54:47.403: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:54:47.405: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:47 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:47 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:37 +0000 
UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:47 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:54:48.403: INFO: node status heartbeat is unchanged for 1.000134159s, waiting for 1m20s May 7 20:54:49.403: INFO: node status heartbeat is unchanged for 2.000788144s, waiting for 1m20s May 7 20:54:50.403: INFO: node status heartbeat is unchanged for 3.000601909s, waiting for 1m20s May 7 20:54:51.402: INFO: node status heartbeat is unchanged for 3.999878688s, waiting for 1m20s May 7 20:54:52.404: INFO: node status heartbeat is unchanged for 5.00128594s, waiting for 1m20s May 7 20:54:53.402: INFO: node status heartbeat is unchanged for 5.998983823s, waiting for 1m20s May 7 20:54:54.403: INFO: node status heartbeat is unchanged for 7.000462355s, waiting for 1m20s May 7 20:54:55.403: INFO: node status heartbeat is unchanged for 8.00016899s, waiting for 1m20s May 7 20:54:56.402: INFO: node status heartbeat is unchanged for 8.999808855s, waiting for 1m20s May 7 20:54:57.404: INFO: node status heartbeat is unchanged for 10.000893963s, waiting for 1m20s May 7 20:54:58.404: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 7 20:54:58.406: INFO: v1.NodeStatus{ Capacity: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 80}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 450471260160}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 201269633024}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Allocatable: v1.ResourceList{s"cmk.intel.com/exclusive-cores": {i: resource.int64Amount{value: 3}, s: "3", Format: "DecimalSI"}, s"cpu": {i: resource.int64Amount{value: 77}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: resource.int64Amount{value: 405424133473}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"}, s"intel.com/intel_sriov_netdevice": {i: resource.int64Amount{value: 4}, s: "4", Format: "DecimalSI"}, s"memory": {i: resource.int64Amount{value: 178884632576}, Format: "BinarySI"}, s"pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:05:02 +0000 UTC"}, Reason: "FlannelIsUp", Message: "Flannel is running on this node"}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: 
v1.Time{Time: s"2021-05-07 20:54:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:57 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:57 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-05-07 20:54:57 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2021-05-07 20:01:25 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2021-05-07 20:02:07 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"}, }, Addresses: []v1.NodeAddress{{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}}, ... // 5 identical fields } May 7 20:54:59.403: INFO: node status heartbeat is unchanged for 999.926729ms, waiting for 1m20s May 7 20:55:00.403: INFO: node status heartbeat is unchanged for 1.999529761s, waiting for 1m20s May 7 20:55:01.404: INFO: node status heartbeat is unchanged for 3.000215732s, waiting for 1m20s May 7 20:55:02.402: INFO: node status heartbeat is unchanged for 3.998463407s, waiting for 1m20s May 7 20:55:03.403: INFO: node status heartbeat is unchanged for 4.99932347s, waiting for 1m20s May 7 20:55:04.402: INFO: node status heartbeat is unchanged for 5.998573143s, waiting for 1m20s May 7 20:55:05.403: INFO: node status heartbeat is unchanged for 6.999323011s, waiting for 1m20s May 7 20:55:06.403: INFO: node status heartbeat is unchanged for 7.999974151s, waiting for 1m20s May 7 20:55:06.406: INFO: node status heartbeat is unchanged for 8.00237164s, waiting for 1m20s STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:55:06.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-2818" for this suite. 
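The NodeLease spec above polls the node object about once per second and prints a go-cmp style diff of v1.NodeStatus whenever a heartbeat field moves; in this run the MemoryPressure/DiskPressure/PIDPressure LastHeartbeatTime advanced roughly every 10s while the node stayed Ready. For readers who want to reproduce a similar observation outside the e2e framework, here is a minimal client-go sketch (not the test's own code; the kubeconfig path, node name, and observation window are assumptions) that watches how often the MemoryPressure condition's LastHeartbeatTime changes:

```go
// heartbeatwatch: a minimal sketch (not the e2e test itself) that polls a
// node's MemoryPressure condition and reports when its LastHeartbeatTime
// changes. Kubeconfig path and node name below are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	var last time.Time
	for i := 0; i < 90; i++ { // ~90s of observation, mirroring the 1s poll in the log
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node2", metav1.GetOptions{}) // assumed node name
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure && !c.LastHeartbeatTime.Time.Equal(last) {
				fmt.Printf("heartbeat changed: %v -> %v\n", last, c.LastHeartbeatTime.Time)
				last = c.LastHeartbeatTime.Time
			}
		}
		time.Sleep(time.Second)
	}
}
```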
• [SLOW TEST:300.054 seconds] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:14.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:678 STEP: getting restart delay-0 May 7 20:52:08.163: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-05-07 20:51:24 +0000 UTC restartedAt=2021-05-07 20:52:06 +0000 UTC (42s) STEP: getting restart delay-1 May 7 20:53:37.482: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-05-07 20:52:11 +0000 UTC restartedAt=2021-05-07 20:53:35 +0000 UTC (1m24s) STEP: getting restart delay-2 May 7 20:56:25.114: INFO: getRestartDelay: restartCount = 6, finishedAt=2021-05-07 20:53:40 +0000 UTC restartedAt=2021-05-07 20:56:24 +0000 UTC (2m44s) STEP: updating the image May 7 20:56:25.622: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update May 7 20:56:53.689: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-05-07 20:56:36 +0000 UTC restartedAt=2021-05-07 20:56:52 +0000 UTC (16s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 20:56:53.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9562" for this suite. 
• [SLOW TEST:398.764 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:678 ------------------------------ {"msg":"PASSED [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":2,"skipped":358,"failed":0} May 7 20:56:53.704: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 7 20:50:13.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:719 STEP: getting restart delay when capped May 7 21:01:56.073: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-05-07 20:56:44 +0000 UTC restartedAt=2021-05-07 21:01:54 +0000 UTC (5m10s) May 7 21:07:09.273: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-05-07 21:01:59 +0000 UTC restartedAt=2021-05-07 21:07:08 +0000 UTC (5m9s) May 7 21:12:19.418: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-05-07 21:07:13 +0000 UTC restartedAt=2021-05-07 21:12:18 +0000 UTC (5m5s) STEP: getting restart delay after a capped delay May 7 21:17:37.791: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-05-07 21:12:23 +0000 UTC restartedAt=2021-05-07 21:17:35 +0000 UTC (5m12s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 7 21:17:37.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3367" for this suite. • [SLOW TEST:1644.144 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:719 ------------------------------ {"msg":"PASSED [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":2,"skipped":215,"failed":0} May 7 21:17:37.803: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":1,"skipped":138,"failed":0} May 7 20:55:06.433: INFO: Running AfterSuite actions on all nodes May 7 21:17:37.833: INFO: Running AfterSuite actions on node 1 May 7 21:17:37.833: INFO: Skipping dumping logs from cluster Ran 30 of 5484 Specs in 1652.012 seconds SUCCESS! -- 30 Passed | 0 Failed | 0 Pending | 5454 Skipped Ginkgo ran 1 suite in 27m33.438922696s Test Suite Passed
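For the MaxContainerBackOff spec, the four sampled delays (5m10s, 5m9s, 5m5s, 5m12s) all sit just above the 5-minute cap; the extra seconds are the brief period the container runs before exiting plus kubelet sync jitter. That is the behaviour the back-off sketch after the previous spec predicts once the doubling sequence saturates at 300s.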