Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1635568137 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Oct 30 04:28:59.469: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:28:59.475: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 30 04:28:59.501: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 30 04:28:59.559: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting
Oct 30 04:28:59.559: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting
Oct 30 04:28:59.559: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 30 04:28:59.559: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 30 04:28:59.559: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 30 04:28:59.575: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 30 04:28:59.575: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 30 04:28:59.575: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 30 04:28:59.575: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 30 04:28:59.575: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 30 04:28:59.575: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 30 04:28:59.575: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 30 04:28:59.575: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 30 04:28:59.575: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 30 04:28:59.575: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 30 04:28:59.575: INFO: e2e test version: v1.21.5
Oct 30 04:28:59.576: INFO: kube-apiserver version: v1.21.1
Oct 30 04:28:59.577: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:28:59.582: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Oct 30 04:28:59.580: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:28:59.601: INFO: Cluster IP family: ipv4
Oct 30 04:28:59.581: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:28:59.602: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 30 04:28:59.596: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:28:59.619: INFO: Cluster IP family: ipv4
Oct 30 04:28:59.600: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:28:59.621: INFO: Cluster IP family: ipv4
SS
------------------------------
Oct 30 04:28:59.602: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:28:59.622: INFO: Cluster IP family: ipv4
SS
------------------------------
Oct 30 04:28:59.601: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:28:59.623: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Oct 30 04:28:59.605: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:28:59.627: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
Oct 30 04:28:59.613: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:28:59.634: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 30 04:28:59.623: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:28:59.644: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:28:59.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-problem-detector
W1030 04:28:59.656882 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 04:28:59.657: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 04:28:59.658: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52
Oct 30 04:28:59.660: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:28:59.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-1044" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
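For context on the skip above: the NodeProblemDetector spec needs SSH access to the nodes, and its BeforeEach bails out when no usable key is found for the "local" provider. A minimal standalone sketch of that kind of presence check follows; the helper name and exact logic are illustrative assumptions, not the framework's actual code, and the key path is the one shown in the log.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// sshKeyPresent loosely mirrors the check behind "No SSH Key for provider
// local": try to stat the provider's private key and return a wrapped error
// when it is missing, so the caller can skip SSH-dependent specs.
func sshKeyPresent() (bool, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return false, err
	}
	keyPath := filepath.Join(home, ".ssh", "id_rsa") // /root/.ssh/id_rsa in the log
	if _, err := os.Stat(keyPath); err != nil {
		return false, fmt.Errorf("error reading SSH key %s: %w", keyPath, err)
	}
	return true, nil
}

func main() {
	if ok, err := sshKeyPresent(); !ok {
		fmt.Printf("skipping SSH-dependent specs: %v\n", err)
	}
}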
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:28:59.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
W1030 04:28:59.887479 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 04:28:59.887: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 04:28:59.889: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should update ConfigMap successfully
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140
STEP: Creating ConfigMap configmap-2053/configmap-test-a6da6697-f2f1-410b-bd70-0f4eea777d5d
STEP: Updating configMap configmap-2053/configmap-test-a6da6697-f2f1-410b-bd70-0f4eea777d5d
STEP: Verifying update of ConfigMap configmap-2053/configmap-test-a6da6697-f2f1-410b-bd70-0f4eea777d5d
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:28:59.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2053" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":1,"skipped":68,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
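For reference, the create → update → verify flow the passing ConfigMap spec above exercises looks roughly like this with client-go. This is a minimal sketch, not the suite's actual code: the namespace and ConfigMap name are placeholders, and the kubeconfig path is the one from the log.

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cmClient := cs.CoreV1().ConfigMaps("default")

	// Create.
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data": "value"},
	}
	created, err := cmClient.Create(context.TODO(), cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Update.
	created.Data["data"] = "updated"
	if _, err := cmClient.Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Verify the update round-tripped through the API server.
	got, err := cmClient.Get(context.TODO(), "configmap-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(got.Data["data"]) // "updated"
}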
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:28:59.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W1030 04:28:59.726485 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 04:28:59.726: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 04:28:59.728: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 30 04:29:03.761: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:29:03.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9392" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":1,"skipped":20,"failed":0}
SSSS
------------------------------
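The assertion above ("Expected: &{DONE} to match Container's Termination Message: DONE") hinges on the TerminationMessagePath mechanism: after the container exits, the kubelet copies the file at that path into status.containerStatuses[0].state.terminated.message. A sketch of a pod exercising those fields, using the core/v1 types (pod name, container name, and command are illustrative assumptions, not the test's exact fixture):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A run-once container whose last words go to the termination message
	// path; the kubelet surfaces the file's contents in the container's
	// terminated state, which is what the spec compares against "DONE".
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "main",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-log"},
				// These are the API defaults, spelled out because they are
				// what the test is about:
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: v1.TerminationMessageReadFile,
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].TerminationMessagePath)
}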
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:29:00.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct 30 04:29:00.426: INFO: Waiting up to 5m0s for pod "security-context-5e69d88b-6f11-463a-b1ed-81f535cdf9ee" in namespace "security-context-1798" to be "Succeeded or Failed"
Oct 30 04:29:00.429: INFO: Pod "security-context-5e69d88b-6f11-463a-b1ed-81f535cdf9ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154577ms
Oct 30 04:29:02.432: INFO: Pod "security-context-5e69d88b-6f11-463a-b1ed-81f535cdf9ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005564215s
Oct 30 04:29:04.436: INFO: Pod "security-context-5e69d88b-6f11-463a-b1ed-81f535cdf9ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009203036s
Oct 30 04:29:06.442: INFO: Pod "security-context-5e69d88b-6f11-463a-b1ed-81f535cdf9ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015331847s
STEP: Saw pod success
Oct 30 04:29:06.442: INFO: Pod "security-context-5e69d88b-6f11-463a-b1ed-81f535cdf9ee" satisfied condition "Succeeded or Failed"
Oct 30 04:29:06.444: INFO: Trying to get logs from node node1 pod security-context-5e69d88b-6f11-463a-b1ed-81f535cdf9ee container test-container:
STEP: delete the pod
Oct 30 04:29:06.454: INFO: Waiting for pod security-context-5e69d88b-6f11-463a-b1ed-81f535cdf9ee to disappear
Oct 30 04:29:06.457: INFO: Pod security-context-5e69d88b-6f11-463a-b1ed-81f535cdf9ee no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:29:06.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-1798" for this suite.

• [SLOW TEST:6.069 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":2,"skipped":313,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:28:59.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
W1030 04:28:59.891793 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 04:28:59.892: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 04:28:59.893: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
E1030 04:29:03.921744 28 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 295 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x653b640, 0x9beb6a0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x653b640, 0x9beb6a0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc002482f0c, 0x2, 0x2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002908c40, 0xc002482f00, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000055218, 0xc002908c40, 0xc004b3dbc0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc000055218, 0xc002908c40, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000055218, 0xc002908c40, 0xc000055218, 0xc002908c40)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc002908c40, 0x14, 0xc004c35230)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc004b41ce0, 0xc001008e10, 0x14, 0xc004c35230, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92
k8s.io/kubernetes/test/e2e/common/node.glob..func2.18()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00116f980, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00116f980, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc0011830e0, 0x768f9a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003b272c0, 0x0, 0x768f9a0, 0xc000164840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003b272c0, 0x768f9a0, 0xc000164840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003a23cc0, 0xc003b272c0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003a23cc0, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003a23cc0, 0xc00479b8f8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000170280, 0x7f8f79f678c0, 0xc0004c3e00, 0x6f05d9d, 0x14, 0xc0022e6270, 0x3, 0x3, 0x7745ab8, 0xc000164840, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x7694a60, 0xc0004c3e00, 0x6f05d9d, 0x14, 0xc003523a40, 0x3, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x7694a60, 0xc0004c3e00, 0x6f05d9d, 0x14, 0xc001991fa0, 0x2, 0x2, 0x25)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0004c3e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0004c3e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0004c3e00, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
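The nil dereference above originates in the poll condition behind WaitForPodContainerStarted (resource.go:334): ContainerStatus.Started is a *bool that the kubelet leaves nil until it has evaluated the container's startup probe, and the pod in this run was still Pending and pulling its image when the condition first fired. A nil-safe version of the check looks like the sketch below; this illustrates the failure mode and guard, and is not the upstream patch verbatim.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// containerStarted is a nil-safe form of the condition that panicked:
// cs.Started stays nil until the kubelet reports startup-probe state,
// so it must be checked before being dereferenced.
func containerStarted(cs v1.ContainerStatus) bool {
	return cs.Started != nil && *cs.Started
}

func main() {
	var cs v1.ContainerStatus // Started is nil, like the still-Pending pod in the log
	fmt.Println(containerStarted(cs)) // prints "false" instead of panicking
}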
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-9548".
STEP: Found 2 events.
Oct 30 04:29:03.924: INFO: At 2021-10-30 04:28:59 +0000 UTC - event for startup-e32fed27-0682-4b82-a851-bcc8c3e0a797: {default-scheduler } Scheduled: Successfully assigned container-probe-9548/startup-e32fed27-0682-4b82-a851-bcc8c3e0a797 to node2
Oct 30 04:29:03.924: INFO: At 2021-10-30 04:29:03 +0000 UTC - event for startup-e32fed27-0682-4b82-a851-bcc8c3e0a797: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1"
Oct 30 04:29:03.926: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 30 04:29:03.926: INFO: startup-e32fed27-0682-4b82-a851-bcc8c3e0a797 node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 04:28:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 04:28:59 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 04:28:59 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 04:28:59 +0000 UTC }]
Oct 30 04:29:03.926: INFO:
Oct 30 04:29:03.932: INFO: Logging node info for node master1
Oct 30 04:29:03.937: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 159174 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:29:00 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:29:00 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:29:00 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:29:00 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest 
localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 04:29:03.938: INFO: Logging kubelet events for node master1
Oct 30 04:29:03.940: INFO: Logging pods the kubelet thinks is on node master1
Oct 30 04:29:03.972: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:29:03.972: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:29:03.972: INFO: Container node-exporter ready: true, restart count 0
Oct 30 04:29:03.972: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:03.972: INFO: Container kube-scheduler ready: true, restart count 0
Oct 30 04:29:03.972: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:03.972: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 04:29:03.972: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:03.972: INFO: Container coredns ready: true, restart count 1
Oct 30 04:29:03.972: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:29:03.972: INFO: Container docker-registry ready: true, restart count 0
Oct 30 04:29:03.972: INFO: Container nginx ready: true, restart count 0
Oct 30 04:29:03.972: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:03.972: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 04:29:03.972: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:03.972: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 30 04:29:03.972: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 04:29:03.972: INFO: Init container install-cni ready: true, restart count 0
Oct 30 04:29:03.972: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 04:29:03.972: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:03.972: INFO: Container kube-multus ready: true, restart count 1
W1030 04:29:03.986267 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 04:29:04.258: INFO: Latency metrics for node master1
Oct 30 04:29:04.258: INFO: Logging node info for node master2
Oct 30 04:29:04.261: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 159061 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:56 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:56 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:56 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:28:56 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 04:29:04.262: INFO: Logging kubelet events for node master2
Oct 30 04:29:04.264: INFO: Logging pods the kubelet thinks is on node master2
Oct 30 04:29:04.288: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:29:04.288: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:29:04.288: INFO: Container node-exporter ready: true, restart count 0
Oct 30 04:29:04.288: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.288: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 04:29:04.288: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.288: INFO: Container kube-controller-manager ready: true, restart count 3
Oct 30 04:29:04.288: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.288: INFO: Container kube-scheduler ready: true, restart count 2
Oct 30 04:29:04.288: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.288: INFO: Container kube-proxy ready: true, restart count 2
Oct 30 04:29:04.288: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 04:29:04.288: INFO: Init container install-cni ready: true, restart count 2
Oct 30 04:29:04.288: INFO: Container kube-flannel ready: true, restart count 1
Oct 30 04:29:04.288: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.288: INFO: Container kube-multus ready: true, restart count 1
W1030 04:29:04.302567 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 04:29:04.367: INFO: Latency metrics for node master2
Oct 30 04:29:04.367: INFO: Logging node info for node master3
Oct 30 04:29:04.370: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 159058 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:55 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:55 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:55 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:28:55 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 04:29:04.370: INFO: Logging kubelet events for node master3
Oct 30 04:29:04.372: INFO: Logging pods the kubelet thinks is on node master3
Oct 30 04:29:04.388: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.388: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 04:29:04.388: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 04:29:04.388: INFO: Init container install-cni ready: true, restart count 2
Oct 30 04:29:04.388: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 04:29:04.388: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.388: INFO: Container kube-multus ready: true, restart count 1
Oct 30 04:29:04.388: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.388: INFO: Container coredns ready: true, restart count 1
Oct 30 04:29:04.388: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:29:04.388: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:29:04.388: INFO: Container prometheus-operator ready: true, restart count 0
Oct 30 04:29:04.388: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:29:04.388: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:29:04.388: INFO: Container node-exporter ready: true, restart count 0
Oct 30 04:29:04.388: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.388: INFO: Container kube-controller-manager ready: true, restart count 1
Oct 30 04:29:04.389: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.389: INFO: Container kube-scheduler ready: true, restart count 2
Oct 30 04:29:04.389: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.389: INFO: Container autoscaler ready: true, restart count 1
Oct 30 04:29:04.389: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.389: INFO: Container nfd-controller ready: true, restart count 0
Oct 30 04:29:04.389: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:29:04.389: INFO: Container kube-apiserver ready: true, restart count 0
W1030 04:29:04.401715 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 04:29:04.475: INFO: Latency metrics for node master3
Oct 30 04:29:04.475: INFO: Logging node info for node node1
Oct 30 04:29:04.478: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 159057 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 01:59:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-30 04:01:56 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:55 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:55 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:55 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:28:55 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 04:29:04.479: INFO: Logging kubelet events for node node1 Oct 30 04:29:04.481: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 04:29:04.502: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:04.502: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 04:29:04.502: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 04:29:04.502: INFO: Container discover ready: false, restart count 0 Oct 30 04:29:04.502: INFO: Container init ready: false, restart count 0 Oct 30 04:29:04.502: INFO: Container install ready: false, restart count 0 Oct 30 04:29:04.502: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 
UTC (0+2 container statuses recorded) Oct 30 04:29:04.502: INFO: Container nodereport ready: true, restart count 0 Oct 30 04:29:04.502: INFO: Container reconcile ready: true, restart count 0 Oct 30 04:29:04.502: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:04.502: INFO: Container kube-multus ready: true, restart count 1 Oct 30 04:29:04.502: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:04.502: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 04:29:04.502: INFO: pod-ready started at 2021-10-30 04:28:59 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:04.502: INFO: Container pod-readiness-gate ready: true, restart count 0 Oct 30 04:29:04.502: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:04.502: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 04:29:04.502: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 04:29:04.502: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 04:29:04.502: INFO: Container node-exporter ready: true, restart count 0 Oct 30 04:29:04.502: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded) Oct 30 04:29:04.502: INFO: Container config-reloader ready: true, restart count 0 Oct 30 04:29:04.502: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 04:29:04.502: INFO: Container grafana ready: true, restart count 0 Oct 30 04:29:04.502: INFO: Container prometheus ready: true, restart count 1 Oct 30 04:29:04.502: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:04.502: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 04:29:04.502: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 04:29:04.502: INFO: Init container install-cni ready: true, restart count 2 Oct 30 04:29:04.502: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 04:29:04.502: INFO: security-context-5e69d88b-6f11-463a-b1ed-81f535cdf9ee started at 2021-10-30 04:29:00 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:04.502: INFO: Container test-container ready: false, restart count 0 Oct 30 04:29:04.503: INFO: implicit-nonroot-uid started at 2021-10-30 04:29:03 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:04.503: INFO: Container implicit-nonroot-uid ready: false, restart count 0 Oct 30 04:29:04.503: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:04.503: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 04:29:04.503: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 04:29:04.503: INFO: Container collectd ready: true, restart count 0 Oct 30 04:29:04.503: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 04:29:04.503: INFO: Container rbac-proxy ready: true, restart count 0 W1030 04:29:04.516825 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 30 04:29:05.074: INFO: Latency metrics for node node1 Oct 30 04:29:05.074: INFO: Logging node info for node node2 Oct 30 04:29:05.076: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 159063 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 01:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-30 04:01:29 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:56 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:56 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:28:56 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:28:56 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 04:29:05.077: INFO: Logging kubelet events for node node2 Oct 30 04:29:05.079: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 04:29:05.495: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.495: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 04:29:05.495: INFO: kubernetes-dashboard-785dcbb76d-pbjjt 
started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.495: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 04:29:05.495: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 04:29:05.495: INFO: Container discover ready: false, restart count 0 Oct 30 04:29:05.495: INFO: Container init ready: false, restart count 0 Oct 30 04:29:05.495: INFO: Container install ready: false, restart count 0 Oct 30 04:29:05.495: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.495: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 04:29:05.495: INFO: startup-db3cbb95-e47d-44a4-8daa-43ba43189e82 started at 2021-10-30 04:28:59 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.495: INFO: Container busybox ready: false, restart count 0 Oct 30 04:29:05.495: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 04:29:05.495: INFO: Container nodereport ready: true, restart count 0 Oct 30 04:29:05.495: INFO: Container reconcile ready: true, restart count 0 Oct 30 04:29:05.495: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 04:29:05.495: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 04:29:05.495: INFO: Container node-exporter ready: true, restart count 0 Oct 30 04:29:05.495: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.495: INFO: Container tas-extender ready: true, restart count 0 Oct 30 04:29:05.495: INFO: downward-api-2c410c8e-253b-4857-a09e-93582a06dc4b started at 2021-10-30 04:28:59 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.495: INFO: Container dapi-container ready: false, restart count 0 Oct 30 04:29:05.495: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 04:29:05.495: INFO: Container collectd ready: true, restart count 0 Oct 30 04:29:05.495: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 04:29:05.495: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 04:29:05.495: INFO: startup-e32fed27-0682-4b82-a851-bcc8c3e0a797 started at 2021-10-30 04:28:59 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.495: INFO: Container busybox ready: false, restart count 0 Oct 30 04:29:05.495: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.495: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 04:29:05.495: INFO: security-context-58cfd584-1983-4b74-954e-e109d15a3710 started at 2021-10-30 04:28:59 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.495: INFO: Container test-container ready: false, restart count 0 Oct 30 04:29:05.496: INFO: busybox-readonly-true-39173504-f5b8-4c20-85f0-7e194dfa1088 started at 2021-10-30 04:28:59 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.496: INFO: Container busybox-readonly-true-39173504-f5b8-4c20-85f0-7e194dfa1088 ready: false, restart count 0 Oct 30 04:29:05.496: INFO: liveness-ad746852-ba6c-4012-99ab-6491fe19dbf5 started at 2021-10-30 04:28:59 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.496: INFO: Container agnhost-container ready: false, restart count 0 Oct 30 04:29:05.496: INFO: 
node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.496: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 04:29:05.496: INFO: secret-test-pod started at 2021-10-30 04:29:00 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.496: INFO: Container test-container ready: false, restart count 0 Oct 30 04:29:05.496: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 04:29:05.496: INFO: Init container install-cni ready: true, restart count 2 Oct 30 04:29:05.496: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 04:29:05.496: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.496: INFO: Container kube-multus ready: true, restart count 1 Oct 30 04:29:05.496: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 04:29:05.496: INFO: Container cmk-webhook ready: true, restart count 0 W1030 04:29:05.517355 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 04:29:06.532: INFO: Latency metrics for node node2 Oct 30 04:29:06.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9548" for this suite. •! Panic [6.675 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Test Panicked runtime error: invalid memory address or nil pointer dereference /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109 panic(0x653b640, 0x9beb6a0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc002482f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002908c40, 0xc002482f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000055218, 0xc002908c40, 0xc004b3dbc0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc000055218, 0xc002908c40, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000055218, 0xc002908c40, 0xc000055218, 0xc002908c40) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc002908c40, 0x14, 0xc004c35230) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc004b41ce0, 0xc001008e10, 0x14, 0xc004c35230, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0004c3e00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0004c3e00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0004c3e00, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:28:59.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context W1030 04:28:59.655126 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 04:28:59.655: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 04:28:59.658: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Oct 30 04:28:59.672: INFO: Waiting up to 5m0s for pod "security-context-58cfd584-1983-4b74-954e-e109d15a3710" in namespace "security-context-9372" to be "Succeeded or Failed" Oct 30 04:28:59.674: INFO: Pod "security-context-58cfd584-1983-4b74-954e-e109d15a3710": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071994ms Oct 30 04:29:01.678: INFO: Pod "security-context-58cfd584-1983-4b74-954e-e109d15a3710": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006004936s Oct 30 04:29:03.682: INFO: Pod "security-context-58cfd584-1983-4b74-954e-e109d15a3710": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009978071s Oct 30 04:29:05.686: INFO: Pod "security-context-58cfd584-1983-4b74-954e-e109d15a3710": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013599088s Oct 30 04:29:07.690: INFO: Pod "security-context-58cfd584-1983-4b74-954e-e109d15a3710": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.017853294s STEP: Saw pod success Oct 30 04:29:07.690: INFO: Pod "security-context-58cfd584-1983-4b74-954e-e109d15a3710" satisfied condition "Succeeded or Failed" Oct 30 04:29:07.693: INFO: Trying to get logs from node node2 pod security-context-58cfd584-1983-4b74-954e-e109d15a3710 container test-container: STEP: delete the pod Oct 30 04:29:07.705: INFO: Waiting for pod security-context-58cfd584-1983-4b74-954e-e109d15a3710 to disappear Oct 30 04:29:07.707: INFO: Pod security-context-58cfd584-1983-4b74-954e-e109d15a3710 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:07.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9372" for this suite. • [SLOW TEST:8.089 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":10,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:03.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 Oct 30 04:29:03.822: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-4115" to be "Succeeded or Failed" Oct 30 04:29:03.824: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 1.915431ms Oct 30 04:29:05.827: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005069548s Oct 30 04:29:07.830: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00857663s Oct 30 04:29:07.830: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:07.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4115" for this suite. 
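The panic recorded further up originates in the e2e framework's pod-wait helper: per the stack trace, podContainerStarted (test/e2e/framework/pod/resource.go:334) dereferences a container status that the kubelet has not yet posted, so polling a pod whose status.containerStatuses is still empty (or whose Started pointer is still nil) trips the nil-pointer panic. A minimal sketch of the defensive check such a poll condition presumably needs, using client-go's corev1 types; the function name and signature here are illustrative, not the framework's actual code:

package podwait

import (
	corev1 "k8s.io/api/core/v1"
)

// containerStarted reports whether the container at index idx has started.
// Unlike the panicking condition in the trace above, it first checks that
// the status slice is long enough and that Started is non-nil, since both
// are unset until the kubelet posts a status for the pod.
func containerStarted(pod *corev1.Pod, idx int) bool {
	if pod == nil || idx >= len(pod.Status.ContainerStatuses) {
		return false // status not posted yet; keep polling
	}
	started := pod.Status.ContainerStatuses[idx].Started
	return started != nil && *started
}
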
•S ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":2,"skipped":24,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:28:59.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test W1030 04:28:59.843025 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 04:28:59.843: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 04:28:59.845: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 Oct 30 04:28:59.858: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-39173504-f5b8-4c20-85f0-7e194dfa1088" in namespace "security-context-test-529" to be "Succeeded or Failed" Oct 30 04:28:59.859: INFO: Pod "busybox-readonly-true-39173504-f5b8-4c20-85f0-7e194dfa1088": Phase="Pending", Reason="", readiness=false. Elapsed: 1.936479ms Oct 30 04:29:01.862: INFO: Pod "busybox-readonly-true-39173504-f5b8-4c20-85f0-7e194dfa1088": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004845313s Oct 30 04:29:03.866: INFO: Pod "busybox-readonly-true-39173504-f5b8-4c20-85f0-7e194dfa1088": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008931677s Oct 30 04:29:05.870: INFO: Pod "busybox-readonly-true-39173504-f5b8-4c20-85f0-7e194dfa1088": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012933408s Oct 30 04:29:07.873: INFO: Pod "busybox-readonly-true-39173504-f5b8-4c20-85f0-7e194dfa1088": Phase="Failed", Reason="", readiness=false. Elapsed: 8.015257573s Oct 30 04:29:07.873: INFO: Pod "busybox-readonly-true-39173504-f5b8-4c20-85f0-7e194dfa1088" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:07.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-529" for this suite. 
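Note that in the readOnlyRootFilesystem spec above the pod ends in Phase="Failed" yet still "satisfied condition \"Succeeded or Failed\"": the wait condition accepts either terminal phase, and the spec then passes precisely because a write to a read-only root filesystem is expected to fail. A minimal sketch of the kind of pod the spec presumably creates (the image tag and write command are illustrative stand-ins, not taken from the log):

package securitycontext

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readOnlyRootfsPod sketches the pod under test: a container that attempts
// a write to its root filesystem with ReadOnlyRootFilesystem set, so the
// container exits non-zero and the pod reaches Phase=Failed as logged above.
func readOnlyRootfsPod() *corev1.Pod {
	readOnly := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-true"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-readonly-true",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "echo hello > /file"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
}
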
• [SLOW TEST:8.060 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":41,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:28:59.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api W1030 04:28:59.896917 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 04:28:59.897: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 04:28:59.898: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars Oct 30 04:28:59.915: INFO: Waiting up to 5m0s for pod "downward-api-2c410c8e-253b-4857-a09e-93582a06dc4b" in namespace "downward-api-9478" to be "Succeeded or Failed" Oct 30 04:28:59.917: INFO: Pod "downward-api-2c410c8e-253b-4857-a09e-93582a06dc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171053ms Oct 30 04:29:01.920: INFO: Pod "downward-api-2c410c8e-253b-4857-a09e-93582a06dc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004634355s Oct 30 04:29:03.922: INFO: Pod "downward-api-2c410c8e-253b-4857-a09e-93582a06dc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007490817s Oct 30 04:29:05.927: INFO: Pod "downward-api-2c410c8e-253b-4857-a09e-93582a06dc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011691415s Oct 30 04:29:07.930: INFO: Pod "downward-api-2c410c8e-253b-4857-a09e-93582a06dc4b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.01464851s STEP: Saw pod success Oct 30 04:29:07.930: INFO: Pod "downward-api-2c410c8e-253b-4857-a09e-93582a06dc4b" satisfied condition "Succeeded or Failed" Oct 30 04:29:07.932: INFO: Trying to get logs from node node2 pod downward-api-2c410c8e-253b-4857-a09e-93582a06dc4b container dapi-container: STEP: delete the pod Oct 30 04:29:07.948: INFO: Waiting for pod downward-api-2c410c8e-253b-4857-a09e-93582a06dc4b to disappear Oct 30 04:29:07.950: INFO: Pod downward-api-2c410c8e-253b-4857-a09e-93582a06dc4b no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:07.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9478" for this suite. • [SLOW TEST:8.081 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":65,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:08.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 Oct 30 04:29:08.303: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:08.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-5597" for this suite. 
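------------------------------
The crictl spec that was just torn down never ran its body: its BeforeEach gates on the cloud provider, and this suite runs with a local provider. In the e2e framework that gating is a one-line skip; a rough sketch of the pattern (the skipper package and call shown here are an assumption about the framework internals, not copied from crictl.go):

package example

import (
	"github.com/onsi/ginkgo"

	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

var _ = ginkgo.Describe("[sig-node] crictl", func() {
	ginkgo.BeforeEach(func() {
		// Skips the spec unless the suite was launched with --provider=gce
		// or --provider=gke, which is what produces the [SKIPPING] entry
		// reported below on this local cluster.
		e2eskipper.SkipUnlessProviderIs("gce", "gke")
	})
})
------------------------------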
S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:28:59.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples W1030 04:28:59.858624 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 04:28:59.858: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 04:28:59.860: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Oct 30 04:28:59.870: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Oct 30 04:28:59.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8800 create -f -' Oct 30 04:29:00.395: INFO: stderr: "" Oct 30 04:29:00.395: INFO: stdout: "secret/test-secret created\n" Oct 30 04:29:00.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8800 create -f -' Oct 30 04:29:00.743: INFO: stderr: "" Oct 30 04:29:00.743: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Oct 30 04:29:08.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8800 logs secret-test-pod test-container' Oct 30 04:29:08.904: INFO: stderr: "" Oct 30 04:29:08.904: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:08.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-8800" for this suite. 
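------------------------------
The examples-8800 spec above created a Secret with kubectl and then a pod that mounts it as a volume and cats one key back out of /etc/secret-volume. The pod half of that flow, sketched in Go against the v1.21 corev1 types (the secret name, mount path, and key match the log; the image and pod layout are illustrative):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretReaderPod mounts the Secret "test-secret" read-only and prints the
// key data-1, which the spec then fetches back via kubectl logs.
func secretReaderPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "test-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}
------------------------------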
• [SLOW TEST:9.074 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":1,"skipped":51,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:09.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Oct 30 04:29:09.065: INFO: Waiting up to 5m0s for pod "security-context-f5da357a-8d1d-48ec-ad80-1f5b7658ecc0" in namespace "security-context-5823" to be "Succeeded or Failed" Oct 30 04:29:09.067: INFO: Pod "security-context-f5da357a-8d1d-48ec-ad80-1f5b7658ecc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22777ms Oct 30 04:29:11.070: INFO: Pod "security-context-f5da357a-8d1d-48ec-ad80-1f5b7658ecc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005000573s Oct 30 04:29:13.073: INFO: Pod "security-context-f5da357a-8d1d-48ec-ad80-1f5b7658ecc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008392751s STEP: Saw pod success Oct 30 04:29:13.073: INFO: Pod "security-context-f5da357a-8d1d-48ec-ad80-1f5b7658ecc0" satisfied condition "Succeeded or Failed" Oct 30 04:29:13.076: INFO: Trying to get logs from node node1 pod security-context-f5da357a-8d1d-48ec-ad80-1f5b7658ecc0 container test-container: STEP: delete the pod Oct 30 04:29:13.195: INFO: Waiting for pod security-context-f5da357a-8d1d-48ec-ad80-1f5b7658ecc0 to disappear Oct 30 04:29:13.197: INFO: Pod security-context-f5da357a-8d1d-48ec-ad80-1f5b7658ecc0 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:13.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5823" for this suite. 
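------------------------------
pod.Spec.SecurityContext.SupplementalGroups, exercised above, adds extra GIDs to every container process in the pod on top of the primary GID. A minimal sketch (the GIDs, image, and names are illustrative):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// supplementalGroupsPod adds extra group IDs at the pod level; the container
// prints its effective group list so the result can be checked from logs.
func supplementalGroupsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "supplemental-groups-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				SupplementalGroups: []int64{1234, 5678},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id -G"},
			}},
		},
	}
}
------------------------------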
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":2,"skipped":112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:07.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Oct 30 04:29:07.906: INFO: Waiting up to 5m0s for pod "busybox-user-0-b0302f62-d7a6-4d8c-8a49-16e9f4c61f57" in namespace "security-context-test-7987" to be "Succeeded or Failed" Oct 30 04:29:07.908: INFO: Pod "busybox-user-0-b0302f62-d7a6-4d8c-8a49-16e9f4c61f57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156907ms Oct 30 04:29:09.911: INFO: Pod "busybox-user-0-b0302f62-d7a6-4d8c-8a49-16e9f4c61f57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004918026s Oct 30 04:29:11.915: INFO: Pod "busybox-user-0-b0302f62-d7a6-4d8c-8a49-16e9f4c61f57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009410352s Oct 30 04:29:13.921: INFO: Pod "busybox-user-0-b0302f62-d7a6-4d8c-8a49-16e9f4c61f57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014727384s Oct 30 04:29:15.924: INFO: Pod "busybox-user-0-b0302f62-d7a6-4d8c-8a49-16e9f4c61f57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017755363s Oct 30 04:29:15.924: INFO: Pod "busybox-user-0-b0302f62-d7a6-4d8c-8a49-16e9f4c61f57" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:15.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7987" for this suite. 
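------------------------------
The runAsUser spec above forces the container process to a specific UID regardless of the USER directive baked into the image; here it was uid 0. A container-level sketch (pod-level pod.Spec.SecurityContext.RunAsUser, tested a little later in this log, behaves the same way but applies to all containers in the pod):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// runAsUserPod overrides the UID the container process runs as; the
// container prints its effective UID so the override can be verified.
func runAsUserPod(uid int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: &uid,
				},
			}},
		},
	}
}
------------------------------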
• [SLOW TEST:8.059 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":38,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:28:59.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:15.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1888" for this suite. • [SLOW TEST:16.078 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":1,"skipped":54,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:07.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 30 04:29:07.924: INFO: Waiting up to 5m0s for pod "security-context-6006b9e0-9156-4795-8394-c4f5b752ed3e" in namespace "security-context-6650" to be "Succeeded or Failed" Oct 30 04:29:07.926: INFO: Pod "security-context-6006b9e0-9156-4795-8394-c4f5b752ed3e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.595288ms Oct 30 04:29:09.930: INFO: Pod "security-context-6006b9e0-9156-4795-8394-c4f5b752ed3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005815787s Oct 30 04:29:11.933: INFO: Pod "security-context-6006b9e0-9156-4795-8394-c4f5b752ed3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008636755s Oct 30 04:29:13.936: INFO: Pod "security-context-6006b9e0-9156-4795-8394-c4f5b752ed3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011889802s Oct 30 04:29:15.939: INFO: Pod "security-context-6006b9e0-9156-4795-8394-c4f5b752ed3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01554922s STEP: Saw pod success Oct 30 04:29:15.939: INFO: Pod "security-context-6006b9e0-9156-4795-8394-c4f5b752ed3e" satisfied condition "Succeeded or Failed" Oct 30 04:29:15.943: INFO: Trying to get logs from node node2 pod security-context-6006b9e0-9156-4795-8394-c4f5b752ed3e container test-container: STEP: delete the pod Oct 30 04:29:15.953: INFO: Waiting for pod security-context-6006b9e0-9156-4795-8394-c4f5b752ed3e to disappear Oct 30 04:29:15.955: INFO: Pod security-context-6006b9e0-9156-4795-8394-c4f5b752ed3e no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:15.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6650" for this suite. • [SLOW TEST:8.076 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":2,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:16.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:16.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-370" for this suite. 
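------------------------------
The RuntimeClass rejection above comes from admission: a RuntimeClass may carry scheduling constraints, and a pod that requests the class while specifying a nodeSelector contradicting them is refused outright rather than left pending. A sketch of such a conflicting pair using the node.k8s.io/v1 types (the handler, labels, and names are illustrative):

package example

import (
	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// conflictingRuntimeClass pins workloads to foo=bar nodes; a pod selecting
// foo=quux while requesting the class is rejected at admission time.
func conflictingRuntimeClass() (*nodev1.RuntimeClass, *corev1.Pod) {
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runtimeclass"},
		Handler:    "runc",
		Scheduling: &nodev1.Scheduling{
			NodeSelector: map[string]string{"foo": "bar"},
		},
	}
	rcName := rc.Name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "conflict-pod"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName,
			// Conflicts with the RuntimeClass's nodeSelector above.
			NodeSelector: map[string]string{"foo": "quux"},
			Containers:   []corev1.Container{{Name: "main", Image: "busybox"}},
		},
	}
	return rc, pod
}
------------------------------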
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":3,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:08.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Oct 30 04:29:08.114: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-3871" to be "Succeeded or Failed" Oct 30 04:29:08.117: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.979435ms Oct 30 04:29:10.121: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006916401s Oct 30 04:29:12.125: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010954703s Oct 30 04:29:14.128: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014528673s Oct 30 04:29:16.133: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018677054s Oct 30 04:29:16.133: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:16.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3871" for this suite. 
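------------------------------
runAsNonRoot, exercised above with an explicit UID, makes the kubelet verify at container start that the resolved UID is non-zero. A sketch with an illustrative UID; if RunAsUser is omitted and the image declares no numeric non-root user, the same setting instead refuses to start the container, which is what the "should not run" variants of this spec elsewhere in the log check:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootPod asks the kubelet to verify the container runs as a non-zero
// UID before starting it; the explicit RunAsUser satisfies that check.
func nonRootPod() *corev1.Pod {
	nonRoot := true
	uid := int64(1234)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-nonroot-uid-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: &nonRoot,
					RunAsUser:    &uid,
				},
			}},
		},
	}
}
------------------------------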
• [SLOW TEST:8.070 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:14.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container Oct 30 04:29:14.090: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:16.093: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:18.095: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container Oct 30 04:29:18.098: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-5100 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:18.098: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:18.191: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-5100 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:18.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Oct 30 04:29:18.312: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-5100 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:18.312: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:18.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-5100" for this suite. 
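------------------------------
The PrivilegedPod spec above runs two containers in one pod and execs `ip link add dummy1` in each: it succeeds only where privileged is true, since that grants the container full capabilities on the node. The pod shape, sketched (images, names, and the sleep command are illustrative):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// privilegedPod pairs a privileged and an unprivileged container so the same
// command can be exec'd in both and the outcomes compared.
func privilegedPod() *corev1.Pod {
	priv, notPriv := true, false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:            "privileged-container",
					Image:           "busybox",
					Command:         []string{"sleep", "3600"},
					SecurityContext: &corev1.SecurityContext{Privileged: &priv},
				},
				{
					Name:            "not-privileged-container",
					Image:           "busybox",
					Command:         []string{"sleep", "3600"},
					SecurityContext: &corev1.SecurityContext{Privileged: &notPriv},
				},
			},
		},
	}
}
------------------------------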
• ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":3,"skipped":579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:16.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:20.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7425" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":76,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:16.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:22.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3772" for this suite. 
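------------------------------
The container-runtime blackbox spec above points a container at an image host that cannot serve it and then watches the kubelet surface the failure as a waiting state (ErrImagePull, then ImagePullBackOff) instead of ever starting the container. A sketch with a deliberately unresolvable registry host (illustrative, not the image the suite uses):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// invalidRegistryPod references a registry that does not exist, so the
// container stays in a waiting state with an image-pull error reason.
func invalidRegistryPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "invalid-registry-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "invalid.registry.example.invalid/alpine:3.1",
			}},
		},
	}
}
------------------------------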
• [SLOW TEST:6.068 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":4,"skipped":254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:18.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Oct 30 04:29:18.835: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-926dc34c-9ea9-4de2-8eb8-637e117f4d9f" in namespace "security-context-test-6050" to be "Succeeded or Failed" Oct 30 04:29:18.837: INFO: Pod "busybox-privileged-true-926dc34c-9ea9-4de2-8eb8-637e117f4d9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257646ms Oct 30 04:29:20.842: INFO: Pod "busybox-privileged-true-926dc34c-9ea9-4de2-8eb8-637e117f4d9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006749708s Oct 30 04:29:22.846: INFO: Pod "busybox-privileged-true-926dc34c-9ea9-4de2-8eb8-637e117f4d9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01075378s Oct 30 04:29:22.846: INFO: Pod "busybox-privileged-true-926dc34c-9ea9-4de2-8eb8-637e117f4d9f" satisfied condition "Succeeded or Failed" Oct 30 04:29:22.922: INFO: Got logs for pod "busybox-privileged-true-926dc34c-9ea9-4de2-8eb8-637e117f4d9f": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:22.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6050" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":4,"skipped":794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:06.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Oct 30 04:29:17.863: INFO: start=2021-10-30 04:29:12.845447989 +0000 UTC m=+14.987185318, now=2021-10-30 04:29:17.863605244 +0000 UTC m=+20.005342624, kubelet pod: {"metadata":{"name":"pod-submit-remove-9dd0396f-ebc0-47b4-b812-aa226a8d1c1f","namespace":"pods-7107","uid":"6fe9170b-f35f-4d4e-9701-7e48f7c9d083","resourceVersion":"159304","creationTimestamp":"2021-10-30T04:29:06Z","deletionTimestamp":"2021-10-30T04:29:42Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"795481509"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.248\"\n ],\n \"mac\": \"9e:b4:3f:98:7d:20\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.248\"\n ],\n \"mac\": \"9e:b4:3f:98:7d:20\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2021-10-30T04:29:06.817682517Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-10-30T04:29:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-6wd7c","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-6wd7c","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-30T04:29:06Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-30T04:29:15Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-30T04:29:15Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-30T04:29:06Z"}],"hostIP":"10.10.190.207","podIP":"10.244.3.248","podIPs":[{"ip":"10.244.3.248"}],"startTime":"2021-10-30T04:29:06Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-10-30T04:29:10Z","finishedAt":"2021-10-30T04:29:14Z","containerID":"docker://33790b0b108a39fa3d7623911bcbc82bbc1d3d3b079a43de7facb2cdfa41bed0"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://33790b0b108a39fa3d7623911bcbc82bbc1d3d3b079a43de7facb2cdfa41bed0","started":false}],"qosClass":"BestEffort"}} Oct 30 04:29:23.381: INFO: start=2021-10-30 04:29:12.845447989 +0000 UTC m=+14.987185318, now=2021-10-30 04:29:23.381719671 +0000 UTC m=+25.523457031, kubelet 
pod: {"metadata":{"name":"pod-submit-remove-9dd0396f-ebc0-47b4-b812-aa226a8d1c1f","namespace":"pods-7107","uid":"6fe9170b-f35f-4d4e-9701-7e48f7c9d083","resourceVersion":"159304","creationTimestamp":"2021-10-30T04:29:06Z","deletionTimestamp":"2021-10-30T04:29:42Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"795481509"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.248\"\n ],\n \"mac\": \"9e:b4:3f:98:7d:20\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.248\"\n ],\n \"mac\": \"9e:b4:3f:98:7d:20\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2021-10-30T04:29:06.817682517Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-10-30T04:29:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-6wd7c","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-6wd7c","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-30T04:29:06Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-30T04:29:15Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-30T04:29:15Z","reason":"ContainersNotReady","message":"containers with unready status: 
[agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-30T04:29:06Z"}],"hostIP":"10.10.190.207","podIP":"10.244.3.248","podIPs":[{"ip":"10.244.3.248"}],"startTime":"2021-10-30T04:29:06Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-10-30T04:29:10Z","finishedAt":"2021-10-30T04:29:14Z","containerID":"docker://33790b0b108a39fa3d7623911bcbc82bbc1d3d3b079a43de7facb2cdfa41bed0"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://33790b0b108a39fa3d7623911bcbc82bbc1d3d3b079a43de7facb2cdfa41bed0","started":false}],"qosClass":"BestEffort"}} Oct 30 04:29:27.865: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:27.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7107" for this suite. • [SLOW TEST:21.100 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":1,"skipped":182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:23.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Oct 30 04:29:23.242: INFO: Waiting up to 5m0s for pod "security-context-16fad013-8517-494c-83e3-26add8aa012f" in namespace "security-context-495" to be "Succeeded or Failed" Oct 30 04:29:23.244: INFO: Pod "security-context-16fad013-8517-494c-83e3-26add8aa012f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.849374ms Oct 30 04:29:25.246: INFO: Pod "security-context-16fad013-8517-494c-83e3-26add8aa012f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004519096s Oct 30 04:29:27.251: INFO: Pod "security-context-16fad013-8517-494c-83e3-26add8aa012f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009597434s Oct 30 04:29:29.255: INFO: Pod "security-context-16fad013-8517-494c-83e3-26add8aa012f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013190467s STEP: Saw pod success Oct 30 04:29:29.255: INFO: Pod "security-context-16fad013-8517-494c-83e3-26add8aa012f" satisfied condition "Succeeded or Failed" Oct 30 04:29:29.257: INFO: Trying to get logs from node node2 pod security-context-16fad013-8517-494c-83e3-26add8aa012f container test-container: STEP: delete the pod Oct 30 04:29:29.270: INFO: Waiting for pod security-context-16fad013-8517-494c-83e3-26add8aa012f to disappear Oct 30 04:29:29.271: INFO: Pod security-context-16fad013-8517-494c-83e3-26add8aa012f no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:29.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-495" for this suite. • [SLOW TEST:6.068 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":5,"skipped":943,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:22.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Oct 30 04:29:22.809: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod Oct 30 04:29:22.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8180 create -f -' Oct 30 04:29:23.267: INFO: stderr: "" Oct 30 04:29:23.267: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Oct 30 04:29:29.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8180 logs dapi-test-pod test-container' Oct 30 04:29:29.449: INFO: stderr: "" Oct 30 04:29:29.449: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-8180\nMY_POD_IP=10.244.4.177\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Oct 30 04:29:29.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-8180 logs dapi-test-pod test-container' Oct 30 04:29:29.601: INFO: stderr: "" Oct 30 04:29:29.601: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-8180\nMY_POD_IP=10.244.4.177\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:29.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-8180" for this suite. 
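------------------------------
Both Downward API specs in this log (host and pod IPs earlier, pod name and namespace here) use the same mechanism: environment variables whose ValueFrom is a fieldRef into the pod object, which is where the MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP, and MY_HOST_IP values in the output above come from. A sketch covering both cases (the image and names are illustrative):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod injects pod metadata and status fields as env vars; the
// container just prints its environment, which the spec reads back via logs.
func downwardAPIPod() *corev1.Pod {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dapi-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("MY_POD_NAME", "metadata.name"),
					fieldEnv("MY_POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("MY_POD_IP", "status.podIP"),
					fieldEnv("MY_HOST_IP", "status.hostIP"),
				},
			}},
		},
	}
}
------------------------------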
• [SLOW TEST:6.831 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133 should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":5,"skipped":482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:29.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Oct 30 04:29:29.774: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:29.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-3957" for this suite. 
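------------------------------
The AppArmor spec above was skipped because this node's OS distro is debian rather than gci or ubuntu. Where it does run, disabling confinement goes through the per-container beta annotation that this Kubernetes release (v1.21) uses for AppArmor. A sketch (the container name and command are illustrative):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unconfinedAppArmorPod opts the container named "test" out of AppArmor
// confinement via the container.apparmor.security.beta.kubernetes.io
// annotation; the container prints its own confinement state.
func unconfinedAppArmorPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "apparmor-example",
			Annotations: map[string]string{
				"container.apparmor.security.beta.kubernetes.io/test": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox",
				Command: []string{"cat", "/proc/self/attr/current"},
			}},
		},
	}
}
------------------------------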
S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:29.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:31.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6193" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":6,"skipped":648,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:20.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Oct 30 04:29:46.402: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:46.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6078" for this suite. • [SLOW TEST:26.083 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":3,"skipped":235,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:28.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-ba293e8b-b6d2-4b2b-a526-139e53907126 in namespace container-probe-4557 Oct 30 04:29:32.089: INFO: Started pod startup-override-ba293e8b-b6d2-4b2b-a526-139e53907126 in namespace container-probe-4557 STEP: checking the pod's current state and verifying that restartCount is 
present Oct 30 04:29:32.091: INFO: Initial restart count of pod startup-override-ba293e8b-b6d2-4b2b-a526-139e53907126 is 1 Oct 30 04:29:52.130: INFO: Restart count of pod container-probe-4557/startup-override-ba293e8b-b6d2-4b2b-a526-139e53907126 is now 2 (20.038346277s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:52.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4557" for this suite. • [SLOW TEST:24.096 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":2,"skipped":276,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:30.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-cb672eff-9a7e-4c26-b0ea-968be6c0e7b6 in namespace container-probe-5046 Oct 30 04:29:34.485: INFO: Started pod liveness-cb672eff-9a7e-4c26-b0ea-968be6c0e7b6 in namespace container-probe-5046 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:29:34.488: INFO: Initial restart count of pod liveness-cb672eff-9a7e-4c26-b0ea-968be6c0e7b6 is 0 Oct 30 04:29:52.529: INFO: Restart count of pod container-probe-5046/liveness-cb672eff-9a7e-4c26-b0ea-968be6c0e7b6 is now 1 (18.04067287s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:52.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5046" for this suite. 
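------------------------------
Both probe specs above drive the same machinery: the kubelet polls the configured probe and, on failure, restarts the container and increments restartCount, which is exactly the counter the log watches. The redirect variant just serves the probe path through a local HTTP redirect first. A sketch of an HTTP liveness probe against the v1.21 API, where Probe still embeds Handler (newer releases rename it ProbeHandler); the agnhost image appears elsewhere in this log, while the port, path, and thresholds here are illustrative:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// httpLivenessPod restarts its container whenever the HTTP liveness probe
// fails, bumping the pod's restartCount each time.
func httpLivenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"liveness"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       3,
					FailureThreshold:    1,
				},
			}},
		},
	}
}
------------------------------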
• [SLOW TEST:22.105 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":6,"skipped":1559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:32.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Oct 30 04:29:32.040: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:34.046: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:36.045: INFO: The status of Pod master is Running (Ready = true) Oct 30 04:29:36.061: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:38.064: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:40.064: INFO: The status of Pod slave is Running (Ready = true) Oct 30 04:29:40.077: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:42.081: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:44.084: INFO: The status of Pod private is Running (Ready = true) Oct 30 04:29:44.098: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:46.103: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:48.102: INFO: The status of Pod default is Running (Ready = true) Oct 30 04:29:48.107: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-3241 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:48.107: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:48.574: INFO: Exec stderr: "" Oct 30 04:29:48.578: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-3241 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:48.578: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:48.675: INFO: Exec stderr: "" Oct 30 04:29:48.678: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-3241 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 
04:29:48.678: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:48.760: INFO: Exec stderr: "" Oct 30 04:29:48.763: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-3241 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:48.763: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:48.850: INFO: Exec stderr: "" Oct 30 04:29:48.854: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-3241 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:48.854: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:48.975: INFO: Exec stderr: "" Oct 30 04:29:48.979: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-3241 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:48.979: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:49.116: INFO: Exec stderr: "" Oct 30 04:29:49.119: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-3241 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:49.119: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:49.211: INFO: Exec stderr: "" Oct 30 04:29:49.213: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-3241 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:49.213: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:49.329: INFO: Exec stderr: "" Oct 30 04:29:49.331: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-3241 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:49.331: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:49.413: INFO: Exec stderr: "" Oct 30 04:29:49.415: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-3241 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:49.416: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:49.499: INFO: Exec stderr: "" Oct 30 04:29:49.502: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-3241 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:49.502: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:49.586: INFO: Exec stderr: "" Oct 30 04:29:49.589: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-3241 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:49.589: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:49.683: INFO: Exec stderr: "" Oct 30 04:29:49.686: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-3241 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:49.686: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:49.768: INFO: Exec stderr: "" 
Oct 30 04:29:49.771: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-3241 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:49.771: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:49.855: INFO: Exec stderr: "" Oct 30 04:29:49.858: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-3241 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:49.858: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:49.948: INFO: Exec stderr: "" Oct 30 04:29:49.950: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-3241 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:49.950: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:50.052: INFO: Exec stderr: "" Oct 30 04:29:50.054: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-3241 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:50.054: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:50.153: INFO: Exec stderr: "" Oct 30 04:29:50.156: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-3241 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:50.156: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:50.248: INFO: Exec stderr: "" Oct 30 04:29:50.250: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-3241 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:50.250: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:50.339: INFO: Exec stderr: "" Oct 30 04:29:50.342: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-3241 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:50.342: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:50.429: INFO: Exec stderr: "" Oct 30 04:29:52.445: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-3241"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-3241"/host; echo host > "/var/lib/kubelet/mount-propagation-3241"/host/file] Namespace:mount-propagation-3241 PodName:hostexec-node2-dph9d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 30 04:29:52.445: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:52.553: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3241 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:52.553: INFO: >>> kubeConfig: 
/root/.kube/config Oct 30 04:29:52.655: INFO: pod master mount master: stdout: "master", stderr: "" error: Oct 30 04:29:52.657: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3241 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:52.657: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:52.751: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:52.755: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3241 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:52.755: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:52.842: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:52.845: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3241 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:52.845: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:52.954: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:52.957: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3241 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:52.957: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:53.044: INFO: pod master mount host: stdout: "host", stderr: "" error: Oct 30 04:29:53.047: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3241 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:53.047: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:53.129: INFO: pod slave mount master: stdout: "master", stderr: "" error: Oct 30 04:29:53.133: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3241 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:53.133: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:53.214: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Oct 30 04:29:53.216: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3241 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:53.216: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:53.322: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:53.325: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3241 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:53.325: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:53.409: INFO: pod 
slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:53.411: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3241 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:53.411: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:53.493: INFO: pod slave mount host: stdout: "host", stderr: "" error: Oct 30 04:29:53.496: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3241 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:53.496: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:53.572: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:53.576: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3241 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:53.576: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:53.658: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:53.661: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3241 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:53.661: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:53.785: INFO: pod private mount private: stdout: "private", stderr: "" error: Oct 30 04:29:53.787: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3241 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:53.787: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:53.874: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:53.878: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3241 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:53.878: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:53.968: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:53.971: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3241 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:53.971: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:54.056: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:54.059: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3241 PodName:default ContainerName:cntr Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:54.059: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:54.145: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:54.147: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3241 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:54.147: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:54.224: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:54.226: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3241 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:54.226: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:54.304: INFO: pod default mount default: stdout: "default", stderr: "" error: Oct 30 04:29:54.307: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3241 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:54.307: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:54.405: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Oct 30 04:29:54.405: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-3241"/master/file` = master] Namespace:mount-propagation-3241 PodName:hostexec-node2-dph9d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 30 04:29:54.405: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:54.502: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-3241"/slave/file] Namespace:mount-propagation-3241 PodName:hostexec-node2-dph9d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 30 04:29:54.502: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:54.589: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-3241"/host] Namespace:mount-propagation-3241 PodName:hostexec-node2-dph9d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 30 04:29:54.589: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:54.687: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-3241 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:54.687: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:54.776: INFO: Exec stderr: "" Oct 30 04:29:54.778: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-3241 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:54.778: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:54.897: INFO: Exec stderr: "" Oct 30 04:29:54.900: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-3241 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:54.900: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:54.998: INFO: Exec stderr: "" Oct 30 04:29:55.001: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-3241 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 04:29:55.001: INFO: >>> kubeConfig: /root/.kube/config Oct 30 04:29:55.094: INFO: Exec stderr: "" Oct 30 04:29:55.094: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-3241"] Namespace:mount-propagation-3241 PodName:hostexec-node2-dph9d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 30 04:29:55.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node2-dph9d in namespace mount-propagation-3241 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:55.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-3241" for this suite. 
• [SLOW TEST:23.192 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":7,"skipped":661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:53.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:56.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-890" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":7,"skipped":1825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:55.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:29:57.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-4467" for this suite. 
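The sysctl spec above ends with the pod being rejected, which is the pass condition: a sysctl the kubelet knows but does not consider safe is refused unless the node explicitly allows it. A sketch of such a pod — the specific sysctl name and value are assumptions for illustration, since the log does not print them:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sysctl-greylist-demo
    spec:
      securityContext:
        sysctls:
        # kernel.msgmax is known to the kubelet but not in its safe set, so
        # the pod is rejected (SysctlForbidden) unless the node's kubelet was
        # started with --allowed-unsafe-sysctls=kernel.msgmax
        - name: kernel.msgmax
          value: "10000000000"
      containers:
      - name: test-container
        image: k8s.gcr.io/e2e-test-images/busybox:1.29   # image assumed
        command: ["sleep", "3600"]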
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":8,"skipped":708,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:56.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:00.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9734" for this suite. • ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:57.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Oct 30 04:29:57.535: INFO: Waiting up to 5m0s for pod "security-context-8409525a-e44a-4e53-ad48-a187848fbc0c" in namespace "security-context-642" to be "Succeeded or Failed" Oct 30 04:29:57.538: INFO: Pod "security-context-8409525a-e44a-4e53-ad48-a187848fbc0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.703006ms Oct 30 04:29:59.541: INFO: Pod "security-context-8409525a-e44a-4e53-ad48-a187848fbc0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005749833s Oct 30 04:30:01.545: INFO: Pod "security-context-8409525a-e44a-4e53-ad48-a187848fbc0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010122219s STEP: Saw pod success Oct 30 04:30:01.545: INFO: Pod "security-context-8409525a-e44a-4e53-ad48-a187848fbc0c" satisfied condition "Succeeded or Failed" Oct 30 04:30:01.547: INFO: Trying to get logs from node node2 pod security-context-8409525a-e44a-4e53-ad48-a187848fbc0c container test-container: STEP: delete the pod Oct 30 04:30:01.558: INFO: Waiting for pod security-context-8409525a-e44a-4e53-ad48-a187848fbc0c to disappear Oct 30 04:30:01.560: INFO: Pod security-context-8409525a-e44a-4e53-ad48-a187848fbc0c no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:01.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-642" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":9,"skipped":793,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:06.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-ee2b2604-3879-44d4-9a1c-7e4ccccc8dc7 in namespace container-probe-8563 Oct 30 04:29:10.530: INFO: Started pod busybox-ee2b2604-3879-44d4-9a1c-7e4ccccc8dc7 in namespace container-probe-8563 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:29:10.532: INFO: Initial restart count of pod busybox-ee2b2604-3879-44d4-9a1c-7e4ccccc8dc7 is 0 Oct 30 04:30:02.641: INFO: Restart count of pod container-probe-8563/busybox-ee2b2604-3879-44d4-9a1c-7e4ccccc8dc7 is now 1 (52.108397928s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:02.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8563" for this suite. 
• [SLOW TEST:56.165 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":3,"skipped":323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:01.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Oct 30 04:30:01.798: INFO: Waiting up to 5m0s for pod "pod-always-succeed6b54963b-509f-4523-af8e-f2e44a6f4f08" in namespace "pods-9218" to be "Succeeded or Failed" Oct 30 04:30:01.801: INFO: Pod "pod-always-succeed6b54963b-509f-4523-af8e-f2e44a6f4f08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075575ms Oct 30 04:30:03.804: INFO: Pod "pod-always-succeed6b54963b-509f-4523-af8e-f2e44a6f4f08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005493627s Oct 30 04:30:05.809: INFO: Pod "pod-always-succeed6b54963b-509f-4523-af8e-f2e44a6f4f08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010873913s Oct 30 04:30:07.813: INFO: Pod "pod-always-succeed6b54963b-509f-4523-af8e-f2e44a6f4f08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014752633s STEP: Saw pod success Oct 30 04:30:07.813: INFO: Pod "pod-always-succeed6b54963b-509f-4523-af8e-f2e44a6f4f08" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:09.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9218" for this suite. 
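The Pods Extended spec above checks a kubelet invariant rather than a pod field: once every container has exited cleanly, the kubelet must not create a replacement sandbox (the spec watches pod events for exactly that). A pod of the shape that triggers the check might look like this sketch, with names assumed:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-always-succeed-demo
    spec:
      # OnFailure plus a clean exit: the pod reaches Succeeded, and no
      # extra sandbox (no spurious SandboxChanged event) should follow
      restartPolicy: OnFailure
      containers:
      - name: succeed
        image: k8s.gcr.io/e2e-test-images/busybox:1.29   # image assumed
        command: ["/bin/true"]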
• [SLOW TEST:8.064 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:15.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-6be177d5-d04e-4d8c-a9f5-a12cb71c3e30 in namespace container-probe-1959 Oct 30 04:29:23.994: INFO: Started pod busybox-6be177d5-d04e-4d8c-a9f5-a12cb71c3e30 in namespace container-probe-1959 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:29:23.996: INFO: Initial restart count of pod busybox-6be177d5-d04e-4d8c-a9f5-a12cb71c3e30 is 0 Oct 30 04:30:10.096: INFO: Restart count of pod container-probe-1959/busybox-6be177d5-d04e-4d8c-a9f5-a12cb71c3e30 is now 1 (46.100055497s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:10.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1959" for this suite. 
• [SLOW TEST:54.158 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":45,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:10.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:10.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-541" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":5,"skipped":79,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:02.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. 
STEP: verifying the node has the label foo-5d06969f-255d-4bfd-8d89-45c29691bee3 bar STEP: verifying the node has the label fizz-d9f4adb8-99ba-4b83-9407-7e681695a8be buzz STEP: Trying to create runtimeclass and pod STEP: removing the label fizz-d9f4adb8-99ba-4b83-9407-7e681695a8be off the node node1 STEP: verifying the node doesn't have the label fizz-d9f4adb8-99ba-4b83-9407-7e681695a8be STEP: removing the label foo-5d06969f-255d-4bfd-8d89-45c29691bee3 off the node node1 STEP: verifying the node doesn't have the label foo-5d06969f-255d-4bfd-8d89-45c29691bee3 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:11.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-9911" for this suite. • [SLOW TEST:8.134 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":4,"skipped":496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:16.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-2189dcb5-dfb1-442d-bf19-9c98b3a14f76 in namespace container-probe-9996 Oct 30 04:29:22.260: INFO: Started pod startup-2189dcb5-dfb1-442d-bf19-9c98b3a14f76 in namespace container-probe-9996 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:29:22.265: INFO: Initial restart count of pod startup-2189dcb5-dfb1-442d-bf19-9c98b3a14f76 is 0 Oct 30 04:30:22.401: INFO: Restart count of pod container-probe-9996/startup-2189dcb5-dfb1-442d-bf19-9c98b3a14f76 is now 1 (1m0.136934066s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:22.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9996" for this suite. 
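The 1m0s elapsed in the spec above reflects probe gating: a livenessProbe is held off until the startupProbe has succeeded once, and only then can liveness failures restart the container. A sketch of that interaction — commands, file paths, and timings are illustrative assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: startup-gates-liveness-demo
    spec:
      containers:
      - name: busybox
        image: k8s.gcr.io/e2e-test-images/busybox:1.29   # image assumed
        # create the flag file after a delay, then idle
        command: ["/bin/sh", "-c", "sleep 30; touch /tmp/started; sleep 600"]
        startupProbe:
          exec:
            command: ["test", "-f", "/tmp/started"]
          periodSeconds: 10
          failureThreshold: 30    # generous startup budget
        livenessProbe:
          # ignored until the startup probe passes once; after that its
          # failures restart the container
          exec:
            command: ["/bin/false"]
          periodSeconds: 5
          failureThreshold: 1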
• [SLOW TEST:66.196 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":3,"skipped":161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:22.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:22.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-9481" for this suite. 
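The NodeLease specs above rely on each kubelet owning a Lease object, named after its node, in the kube-node-lease namespace: it carries an OwnerReference to the Node and is renewed well within its 40s duration (roughly every 10s by default). What such an object looks like, with the uid and timestamp purely illustrative:

    apiVersion: coordination.k8s.io/v1
    kind: Lease
    metadata:
      name: node1                  # leases are named after their node
      namespace: kube-node-lease
      ownerReferences:
      - apiVersion: v1
        kind: Node
        name: node1
        uid: 11111111-2222-3333-4444-555555555555   # illustrative only
    spec:
      holderIdentity: node1
      leaseDurationSeconds: 40
      renewTime: "2021-10-30T04:30:20.000000Z"      # bumped roughly every 10s

On a live cluster, kubectl get lease -n kube-node-lease node1 -o yaml shows the real object.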
• ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":4,"skipped":285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:28:59.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1030 04:28:59.946681 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 04:28:59.946: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 04:28:59.948: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-db3cbb95-e47d-44a4-8daa-43ba43189e82 in namespace container-probe-6288 Oct 30 04:29:07.966: INFO: Started pod startup-db3cbb95-e47d-44a4-8daa-43ba43189e82 in namespace container-probe-6288 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:29:07.969: INFO: Initial restart count of pod startup-db3cbb95-e47d-44a4-8daa-43ba43189e82 is 0 Oct 30 04:30:24.119: INFO: Restart count of pod container-probe-6288/startup-db3cbb95-e47d-44a4-8daa-43ba43189e82 is now 1 (1m16.149886716s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:24.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6288" for this suite. 
• [SLOW TEST:84.207 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":1,"skipped":110,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:22.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:26.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9107" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":5,"skipped":330,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:24.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 Oct 30 04:30:24.188: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-2b3b85ab-cf4e-4e59-a98e-91b675a8fa16" in namespace "security-context-test-9014" to be "Succeeded or Failed" Oct 30 04:30:24.190: INFO: Pod "alpine-nnp-nil-2b3b85ab-cf4e-4e59-a98e-91b675a8fa16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137219ms Oct 30 04:30:26.194: INFO: Pod "alpine-nnp-nil-2b3b85ab-cf4e-4e59-a98e-91b675a8fa16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005476008s Oct 30 04:30:28.197: INFO: Pod "alpine-nnp-nil-2b3b85ab-cf4e-4e59-a98e-91b675a8fa16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009089457s Oct 30 04:30:30.201: INFO: Pod "alpine-nnp-nil-2b3b85ab-cf4e-4e59-a98e-91b675a8fa16": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.012522391s Oct 30 04:30:30.201: INFO: Pod "alpine-nnp-nil-2b3b85ab-cf4e-4e59-a98e-91b675a8fa16" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:30.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9014" for this suite. • [SLOW TEST:6.066 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":119,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:27.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Oct 30 04:30:27.061: INFO: Waiting up to 5m0s for pod "security-context-8e140b40-8ca6-43ad-9824-03f2b06a5ecc" in namespace "security-context-5889" to be "Succeeded or Failed" Oct 30 04:30:27.064: INFO: Pod "security-context-8e140b40-8ca6-43ad-9824-03f2b06a5ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370353ms Oct 30 04:30:29.070: INFO: Pod "security-context-8e140b40-8ca6-43ad-9824-03f2b06a5ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008494051s Oct 30 04:30:31.077: INFO: Pod "security-context-8e140b40-8ca6-43ad-9824-03f2b06a5ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015346894s Oct 30 04:30:33.081: INFO: Pod "security-context-8e140b40-8ca6-43ad-9824-03f2b06a5ecc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019312489s STEP: Saw pod success Oct 30 04:30:33.081: INFO: Pod "security-context-8e140b40-8ca6-43ad-9824-03f2b06a5ecc" satisfied condition "Succeeded or Failed" Oct 30 04:30:33.083: INFO: Trying to get logs from node node2 pod security-context-8e140b40-8ca6-43ad-9824-03f2b06a5ecc container test-container: STEP: delete the pod Oct 30 04:30:33.101: INFO: Waiting for pod security-context-8e140b40-8ca6-43ad-9824-03f2b06a5ecc to disappear Oct 30 04:30:33.103: INFO: Pod security-context-8e140b40-8ca6-43ad-9824-03f2b06a5ecc no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:33.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5889" for this suite. • [SLOW TEST:6.080 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":6,"skipped":440,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:30.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-c2b1884f-37e6-4b5a-96fb-113304ac62ec in namespace container-probe-8387 Oct 30 04:30:38.266: INFO: Started pod liveness-override-c2b1884f-37e6-4b5a-96fb-113304ac62ec in namespace container-probe-8387 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:30:38.268: INFO: Initial restart count of pod liveness-override-c2b1884f-37e6-4b5a-96fb-113304ac62ec is 0 Oct 30 04:30:46.285: INFO: Restart count of pod container-probe-8387/liveness-override-c2b1884f-37e6-4b5a-96fb-113304ac62ec is now 1 (8.016932547s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:46.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8387" for this suite. 
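The 8s restart in the liveness-override spec above demonstrates the [Feature:ProbeTerminationGracePeriod] gate: a terminationGracePeriodSeconds set on the probe itself overrides the pod-wide value when the kubelet kills a container for a failed probe. A sketch of that configuration (timings illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-override-demo
    spec:
      terminationGracePeriodSeconds: 600   # pod-wide default: very slow kills
      containers:
      - name: busybox
        image: k8s.gcr.io/e2e-test-images/busybox:1.29   # image assumed
        command: ["sleep", "600"]
        livenessProbe:
          exec:
            command: ["/bin/false"]
          periodSeconds: 5
          failureThreshold: 1
          # with the feature gate enabled, this shorter value wins over the
          # pod-wide 600s for probe-triggered kills, so the restart shows up
          # within seconds instead of minutes
          terminationGracePeriodSeconds: 5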
• [SLOW TEST:16.072 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":3,"skipped":120,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:46.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Oct 30 04:30:46.386: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:46.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-4702" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:11.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
STEP: Creating a RC of 20 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup20-139a0340-08c7-4a98-8458-c3753fccc4f4 in namespace kubelet-3727
I1030 04:30:11.230602 39 runners.go:190] Created replication controller with name: cleanup20-139a0340-08c7-4a98-8458-c3753fccc4f4, namespace: kubelet-3727, replica count: 20
I1030 04:30:21.282853 39 runners.go:190] cleanup20-139a0340-08c7-4a98-8458-c3753fccc4f4 Pods: 20 out of 20 created, 4 running, 16 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1030 04:30:31.284575 39 runners.go:190] cleanup20-139a0340-08c7-4a98-8458-c3753fccc4f4 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 30 04:30:32.286: INFO: Checking pods on node node2 via /runningpods endpoint
Oct 30 04:30:32.287: INFO: Checking pods on node node1 via /runningpods endpoint
Oct 30 04:30:32.356: INFO: Resource usage on node "master3":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        0.500       4011.97                 1767.15
"runtime"  0.089       635.34                  316.07
"kubelet"  0.089       635.34                  316.07

Resource usage on node "node1":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        2.279       6534.11                 2343.43
"runtime"  0.705       2791.12                 744.80
"kubelet"  0.705       2791.12                 744.80

Resource usage on node "node2":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        1.157       4405.41                 1281.80
"runtime"  0.880       1712.38                 569.05
"kubelet"  0.880       1712.38                 569.05

Resource usage on node "master1":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"kubelet"  0.113       666.46                  279.58
"/"        0.489       5301.01                 1768.95
"runtime"  0.113       666.46                  279.58

Resource usage on node "master2":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        0.359       3696.50                 1504.21
"runtime"  0.101       557.17                  221.90
"kubelet"  0.101       557.17                  221.90

STEP: Deleting the RC
STEP: deleting ReplicationController cleanup20-139a0340-08c7-4a98-8458-c3753fccc4f4 in namespace kubelet-3727, will wait for the garbage collector to delete the pods
Oct 30 04:30:32.417: INFO: Deleting ReplicationController cleanup20-139a0340-08c7-4a98-8458-c3753fccc4f4 took: 4.286151ms
Oct 30 04:30:33.018: INFO: Terminating ReplicationController cleanup20-139a0340-08c7-4a98-8458-c3753fccc4f4 pods took: 600.743237ms
Oct 30 04:30:54.619: INFO: Checking pods on node node2 via /runningpods endpoint
Oct 30 04:30:54.619: INFO: Checking pods on node node1 via /runningpods endpoint
Oct 30 04:30:54.635: INFO: Deleting 20 pods on 2 nodes completed in 1.017096159s after the RC was deleted
Oct 30 04:30:54.636: INFO: CPU usage of containers on node "master1":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  0.462  0.462  0.489  0.489  0.489
"runtime"  0.000  0.000  0.110  0.113  0.113  0.113  0.113
"kubelet"  0.000  0.000  0.110  0.113  0.113  0.113  0.113

CPU usage of containers on node "master2":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  0.336  0.336  0.359  0.359  0.359
"runtime"  0.000  0.000  0.090  0.099  0.099  0.099  0.099
"kubelet"  0.000  0.000  0.090  0.099  0.099  0.099  0.099

CPU usage of containers on node "master3":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  0.411  0.411  0.470  0.470  0.470
"runtime"  0.000  0.000  0.089  0.114  0.114  0.114  0.114
"kubelet"  0.000  0.000  0.089  0.114  0.114  0.114  0.114

CPU usage of containers on node "node1":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  1.460  1.515  1.515  1.515  1.515
"runtime"  0.000  0.000  0.456  0.473  0.473  0.473  0.473
"kubelet"  0.000  0.000  0.456  0.473  0.473  0.473  0.473

CPU usage of containers on node "node2":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  1.626  1.626  1.704  1.704  1.704
"runtime"  0.000  0.000  0.734  0.880  0.880  0.880  0.880
"kubelet"  0.000  0.000  0.734  0.880  0.880  0.880  0.880

[AfterEach] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node node1
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node node2
STEP: verifying the node doesn't have the label kubelet_cleanup
[AfterEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:30:54.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-3727" for this suite.

• [SLOW TEST:43.488 seconds]
[sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":5,"skipped":527,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:30:54.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
Oct 30 04:30:54.784: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:30:54.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-268" for this suite.
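
The "Clean up pods on node" spec above drives a ReplicationController of 20 pause pods through runners.go, then checks each kubelet's /runningpods endpoint after the RC is deleted. A minimal client-go sketch of that setup step, assuming a kubeconfig at /root/.kube/config and client-go near v0.21; the namespace, label, RC name, and image tag below are illustrative, not values taken from this run:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        replicas := int32(20)
        labels := map[string]string{"name": "cleanup20"} // illustrative label
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "cleanup20", Namespace: "default"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "pause",
                            Image: "k8s.gcr.io/pause:3.4.1", // illustrative image tag
                        }},
                    },
                },
            },
        }

        // Deleting this RC later (letting the garbage collector reap the
        // pods) is what the spec times against its per-node budget.
        if _, err := cs.CoreV1().ReplicationControllers("default").Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("created RC cleanup20 with 20 replicas")
    }
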
S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42 ------------------------------ SSS ------------------------------ Oct 30 04:30:54.800: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:46.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Oct 30 04:30:46.761: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-6539b923-b210-4767-a04a-50e6df33f277" in namespace "security-context-test-137" to be "Succeeded or Failed" Oct 30 04:30:46.764: INFO: Pod "alpine-nnp-true-6539b923-b210-4767-a04a-50e6df33f277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.50036ms Oct 30 04:30:48.768: INFO: Pod "alpine-nnp-true-6539b923-b210-4767-a04a-50e6df33f277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006988321s Oct 30 04:30:50.773: INFO: Pod "alpine-nnp-true-6539b923-b210-4767-a04a-50e6df33f277": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011838604s Oct 30 04:30:52.779: INFO: Pod "alpine-nnp-true-6539b923-b210-4767-a04a-50e6df33f277": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018081953s Oct 30 04:30:54.782: INFO: Pod "alpine-nnp-true-6539b923-b210-4767-a04a-50e6df33f277": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021050219s Oct 30 04:30:56.786: INFO: Pod "alpine-nnp-true-6539b923-b210-4767-a04a-50e6df33f277": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024702279s Oct 30 04:30:56.786: INFO: Pod "alpine-nnp-true-6539b923-b210-4767-a04a-50e6df33f277" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:30:56.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-137" for this suite. 
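
The alpine-nnp-true pod in the Security Context spec above asserts that a container may gain privileges when allowPrivilegeEscalation is true. A minimal sketch of a pod object with that shape, using the k8s.io/api types; the pod name, image, and restart policy are illustrative choices, not the test's exact fixture:

    package e2esketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // nnpTruePod returns a pod whose single container is allowed to gain
    // privileges (the kubelet leaves no_new_privs unset for it).
    func nnpTruePod() *corev1.Pod {
        allow := true
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-true"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "alpine",
                    Image: "alpine:3.12",
                    SecurityContext: &corev1.SecurityContext{
                        AllowPrivilegeEscalation: &allow,
                    },
                }},
            },
        }
    }
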
• [SLOW TEST:10.070 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":8,"skipped":2027,"failed":0} [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:00.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-d648a291-33e1-4942-b9a8-21519a7d444f in namespace container-probe-1076 Oct 30 04:30:04.585: INFO: Started pod busybox-d648a291-33e1-4942-b9a8-21519a7d444f in namespace container-probe-1076 Oct 30 04:30:04.585: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (1.016µs elapsed) Oct 30 04:30:06.585: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (2.000279966s elapsed) Oct 30 04:30:08.588: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (4.002495047s elapsed) Oct 30 04:30:10.589: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (6.004084193s elapsed) Oct 30 04:30:12.590: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (8.005142761s elapsed) Oct 30 04:30:14.591: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (10.00621273s elapsed) Oct 30 04:30:16.593: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (12.007480478s elapsed) Oct 30 04:30:18.593: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (14.008075907s elapsed) Oct 30 04:30:20.594: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (16.008876349s elapsed) Oct 30 04:30:22.595: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (18.010112606s elapsed) Oct 30 04:30:24.596: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (20.011157016s elapsed) Oct 30 04:30:26.598: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (22.012431307s elapsed) Oct 30 04:30:28.598: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (24.012957532s elapsed) Oct 30 04:30:30.599: 
INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (26.013700993s elapsed) Oct 30 04:30:32.600: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (28.01499202s elapsed) Oct 30 04:30:34.601: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (30.015548282s elapsed) Oct 30 04:30:36.602: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (32.016817415s elapsed) Oct 30 04:30:38.603: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (34.017620382s elapsed) Oct 30 04:30:40.604: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (36.019005868s elapsed) Oct 30 04:30:42.605: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (38.020212964s elapsed) Oct 30 04:30:44.606: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (40.020864716s elapsed) Oct 30 04:30:46.607: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (42.022115555s elapsed) Oct 30 04:30:48.608: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (44.022858621s elapsed) Oct 30 04:30:50.609: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (46.024310447s elapsed) Oct 30 04:30:52.611: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (48.025582582s elapsed) Oct 30 04:30:54.611: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (50.0263108s elapsed) Oct 30 04:30:56.612: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (52.0269856s elapsed) Oct 30 04:30:58.613: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (54.028052711s elapsed) Oct 30 04:31:00.614: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (56.029317226s elapsed) Oct 30 04:31:02.615: INFO: pod container-probe-1076/busybox-d648a291-33e1-4942-b9a8-21519a7d444f is not ready (58.029919621s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:31:04.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1076" for this suite. 
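
The probe spec above relies on the exec-probe timeout enforcement that kubelets gained in 1.20 (hence the [MinimumKubeletVersion:1.20] tag): a readiness command that outlives timeoutSeconds counts as a probe failure, so the pod stays not-ready for the whole minute of polling logged above. A sketch of a probe with that shape; the command and timings are illustrative:

    package e2esketch

    import corev1 "k8s.io/api/core/v1"

    // slowExecReadiness returns a readiness probe whose command runs far
    // longer than its timeout, so every probe attempt fails.
    func slowExecReadiness() *corev1.Probe {
        p := &corev1.Probe{
            InitialDelaySeconds: 5,
            TimeoutSeconds:      1, // enforced for exec probes since kubelet 1.20
            PeriodSeconds:       2,
        }
        // Exec is a promoted field of Probe, so this assignment compiles
        // against both the v1.21-era Handler and the later ProbeHandler.
        p.Exec = &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 600"}}
        return p
    }
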
• [SLOW TEST:64.080 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":9,"skipped":2027,"failed":0} Oct 30 04:31:04.632: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":10,"skipped":901,"failed":0} [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:09.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 [It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 Oct 30 04:30:09.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-87 create -f -' Oct 30 04:30:10.298: INFO: stderr: "" Oct 30 04:30:10.298: INFO: stdout: "pod/liveness-exec created\n" Oct 30 04:30:10.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-87 create -f -' Oct 30 04:30:10.614: INFO: stderr: "" Oct 30 04:30:10.614: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Oct 30 04:30:20.622: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:20.622: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:22.626: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:22.626: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:24.630: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:24.630: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:26.635: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:26.635: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:28.638: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:28.638: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:30.643: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:30.643: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:32.647: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:32.647: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:34.650: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:34.650: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:36.653: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:36.653: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:38.657: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:38.657: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:40.664: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:40.664: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:42.668: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:42.668: INFO: Pod: liveness-exec, restart 
count:0 Oct 30 04:30:44.670: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:44.671: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:46.674: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:46.674: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:48.677: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:48.678: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:50.683: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:50.683: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:52.689: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:52.689: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:54.693: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:54.693: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:56.696: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:56.696: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:30:58.700: INFO: Pod: liveness-http, restart count:0 Oct 30 04:30:58.700: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:00.704: INFO: Pod: liveness-http, restart count:0 Oct 30 04:31:00.704: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:02.708: INFO: Pod: liveness-http, restart count:1 Oct 30 04:31:02.708: INFO: Saw liveness-http restart, succeeded... Oct 30 04:31:02.709: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:04.713: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:06.717: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:08.721: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:10.726: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:12.730: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:14.734: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:16.740: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:18.744: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:20.748: INFO: Pod: liveness-exec, restart count:0 Oct 30 04:31:22.752: INFO: Pod: liveness-exec, restart count:1 Oct 30 04:31:22.752: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:31:22.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-87" for this suite. 
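
The liveness-exec and liveness-http pods above come from the examples directory and are expected to fail their probes and be restarted, which is what the rising restart counts confirm. A sketch of the two probe shapes involved; the file path, health endpoint, port, and timings are illustrative:

    package e2esketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // execLiveness fails once /tmp/health disappears from the container.
    func execLiveness() *corev1.Probe {
        p := &corev1.Probe{InitialDelaySeconds: 15, TimeoutSeconds: 1}
        p.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}
        return p
    }

    // httpLiveness fails once GET /healthz stops returning a success code.
    func httpLiveness() *corev1.Probe {
        p := &corev1.Probe{InitialDelaySeconds: 15, TimeoutSeconds: 1}
        p.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}
        return p
    }
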
• [SLOW TEST:72.925 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":11,"skipped":901,"failed":0} Oct 30 04:31:22.763: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:28:59.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1030 04:28:59.908757 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 04:28:59.909: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 04:28:59.912: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-ad746852-ba6c-4012-99ab-6491fe19dbf5 in namespace container-probe-7244 Oct 30 04:29:09.930: INFO: Started pod liveness-ad746852-ba6c-4012-99ab-6491fe19dbf5 in namespace container-probe-7244 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:29:09.932: INFO: Initial restart count of pod liveness-ad746852-ba6c-4012-99ab-6491fe19dbf5 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:33:10.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7244" for this suite. 
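
Probe specs like the two above pass or fail on the restartCount reported in the pod's containerStatuses: the non-local-redirect spec wants it to stay 0 for several minutes, while the liveness specs want it to rise. A sketch of polling that counter with client-go; the interval and deadline are illustrative:

    package e2esketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForRestart polls a pod until any container reports a restart,
    // or the deadline passes (wait.ErrWaitTimeout in that case).
    func waitForRestart(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, s := range pod.Status.ContainerStatuses {
                if s.RestartCount > 0 {
                    return true, nil
                }
            }
            return false, nil
        })
    }
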
• [SLOW TEST:250.653 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":1,"skipped":71,"failed":0} Oct 30 04:33:10.545: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:30:33.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Oct 30 04:30:39.359: INFO: watch delete seen for pod-submit-status-2-0 Oct 30 04:30:39.359: INFO: Pod pod-submit-status-2-0 on node node2 timings total=6.124863675s t=188ms run=0s execute=0s Oct 30 04:30:40.361: INFO: watch delete seen for pod-submit-status-1-0 Oct 30 04:30:40.361: INFO: Pod pod-submit-status-1-0 on node node2 timings total=7.126171432s t=312ms run=0s execute=0s Oct 30 04:30:42.273: INFO: watch delete seen for pod-submit-status-0-0 Oct 30 04:30:42.273: INFO: Pod pod-submit-status-0-0 on node node2 timings total=9.03834019s t=1.97s run=0s execute=0s Oct 30 04:30:44.161: INFO: watch delete seen for pod-submit-status-1-1 Oct 30 04:30:44.162: INFO: Pod pod-submit-status-1-1 on node node2 timings total=3.80084161s t=1.69s run=0s execute=0s Oct 30 04:30:49.418: INFO: watch delete seen for pod-submit-status-0-1 Oct 30 04:30:49.418: INFO: Pod pod-submit-status-0-1 on node node2 timings total=7.145678945s t=1.582s run=0s execute=0s Oct 30 04:30:50.160: INFO: watch delete seen for pod-submit-status-2-1 Oct 30 04:30:50.160: INFO: Pod pod-submit-status-2-1 on node node2 timings total=10.800514563s t=764ms run=0s execute=0s Oct 30 04:30:51.760: INFO: watch delete seen for pod-submit-status-1-2 Oct 30 04:30:51.760: INFO: Pod pod-submit-status-1-2 on node node2 timings total=7.5980904s t=294ms run=0s execute=0s Oct 30 04:30:54.560: INFO: watch delete seen for pod-submit-status-1-3 Oct 30 04:30:54.561: INFO: Pod pod-submit-status-1-3 on node node2 timings total=2.800736563s t=104ms run=0s execute=0s Oct 30 04:30:56.960: INFO: watch delete seen for pod-submit-status-2-2 Oct 30 04:30:56.961: INFO: Pod pod-submit-status-2-2 on node node2 timings total=6.800654548s t=1.634s run=0s execute=0s Oct 30 04:30:57.359: INFO: watch delete seen for pod-submit-status-0-2 Oct 30 04:30:57.359: INFO: Pod pod-submit-status-0-2 on node node2 timings total=7.940797712s t=1.622s run=0s execute=0s Oct 30 04:31:02.889: INFO: watch delete seen for pod-submit-status-2-3 Oct 30 04:31:02.889: INFO: Pod pod-submit-status-2-3 on node node1 timings total=5.9288759s t=279ms run=0s execute=0s Oct 30 04:31:02.893: INFO: watch delete seen for 
pod-submit-status-1-4 Oct 30 04:31:02.893: INFO: Pod pod-submit-status-1-4 on node node2 timings total=8.332407909s t=298ms run=0s execute=0s Oct 30 04:31:13.006: INFO: watch delete seen for pod-submit-status-2-4 Oct 30 04:31:13.006: INFO: Pod pod-submit-status-2-4 on node node1 timings total=10.11687834s t=245ms run=0s execute=0s Oct 30 04:31:13.185: INFO: watch delete seen for pod-submit-status-1-5 Oct 30 04:31:13.185: INFO: Pod pod-submit-status-1-5 on node node1 timings total=10.291736221s t=1.951s run=3s execute=0s Oct 30 04:31:19.928: INFO: watch delete seen for pod-submit-status-0-3 Oct 30 04:31:19.928: INFO: Pod pod-submit-status-0-3 on node node2 timings total=22.568463505s t=1.56s run=0s execute=0s Oct 30 04:31:22.804: INFO: watch delete seen for pod-submit-status-2-5 Oct 30 04:31:22.805: INFO: Pod pod-submit-status-2-5 on node node1 timings total=9.798004834s t=490ms run=0s execute=0s Oct 30 04:31:22.814: INFO: watch delete seen for pod-submit-status-1-6 Oct 30 04:31:22.815: INFO: Pod pod-submit-status-1-6 on node node1 timings total=9.629698753s t=920ms run=0s execute=0s Oct 30 04:31:26.284: INFO: watch delete seen for pod-submit-status-2-6 Oct 30 04:31:26.284: INFO: Pod pod-submit-status-2-6 on node node1 timings total=3.479894953s t=463ms run=0s execute=0s Oct 30 04:31:32.826: INFO: watch delete seen for pod-submit-status-2-7 Oct 30 04:31:32.826: INFO: Pod pod-submit-status-2-7 on node node1 timings total=6.541275997s t=1.073s run=2s execute=0s Oct 30 04:31:32.893: INFO: watch delete seen for pod-submit-status-0-4 Oct 30 04:31:32.893: INFO: Pod pod-submit-status-0-4 on node node2 timings total=12.965122519s t=1.301s run=0s execute=0s Oct 30 04:31:32.901: INFO: watch delete seen for pod-submit-status-1-7 Oct 30 04:31:32.901: INFO: Pod pod-submit-status-1-7 on node node2 timings total=10.086444602s t=1.559s run=0s execute=0s Oct 30 04:31:42.894: INFO: watch delete seen for pod-submit-status-0-5 Oct 30 04:31:42.894: INFO: Pod pod-submit-status-0-5 on node node2 timings total=10.001045738s t=1.873s run=0s execute=0s Oct 30 04:31:42.905: INFO: watch delete seen for pod-submit-status-1-8 Oct 30 04:31:42.905: INFO: Pod pod-submit-status-1-8 on node node2 timings total=10.003838443s t=1.281s run=0s execute=0s Oct 30 04:31:42.922: INFO: watch delete seen for pod-submit-status-2-8 Oct 30 04:31:42.922: INFO: Pod pod-submit-status-2-8 on node node2 timings total=10.095773929s t=431ms run=0s execute=0s Oct 30 04:31:52.810: INFO: watch delete seen for pod-submit-status-1-9 Oct 30 04:31:52.810: INFO: Pod pod-submit-status-1-9 on node node1 timings total=9.905188222s t=1.988s run=4s execute=0s Oct 30 04:32:02.812: INFO: watch delete seen for pod-submit-status-1-10 Oct 30 04:32:02.812: INFO: Pod pod-submit-status-1-10 on node node1 timings total=10.002036494s t=133ms run=0s execute=0s Oct 30 04:32:02.889: INFO: watch delete seen for pod-submit-status-2-9 Oct 30 04:32:02.889: INFO: Pod pod-submit-status-2-9 on node node2 timings total=19.967049045s t=1.257s run=0s execute=0s Oct 30 04:32:12.898: INFO: watch delete seen for pod-submit-status-2-10 Oct 30 04:32:12.898: INFO: Pod pod-submit-status-2-10 on node node2 timings total=10.009451778s t=1.349s run=0s execute=0s Oct 30 04:32:13.002: INFO: watch delete seen for pod-submit-status-1-11 Oct 30 04:32:13.002: INFO: Pod pod-submit-status-1-11 on node node1 timings total=10.189733959s t=891ms run=0s execute=0s Oct 30 04:32:20.066: INFO: watch delete seen for pod-submit-status-1-12 Oct 30 04:32:20.066: INFO: Pod pod-submit-status-1-12 on node node2 
timings total=7.063464324s t=995ms run=0s execute=0s Oct 30 04:32:20.077: INFO: watch delete seen for pod-submit-status-0-6 Oct 30 04:32:20.077: INFO: Pod pod-submit-status-0-6 on node node2 timings total=37.183080469s t=282ms run=0s execute=0s Oct 30 04:32:22.890: INFO: watch delete seen for pod-submit-status-2-11 Oct 30 04:32:22.890: INFO: Pod pod-submit-status-2-11 on node node2 timings total=9.991498535s t=84ms run=0s execute=0s Oct 30 04:32:32.811: INFO: watch delete seen for pod-submit-status-0-7 Oct 30 04:32:32.811: INFO: Pod pod-submit-status-0-7 on node node1 timings total=12.733563249s t=1.732s run=0s execute=0s Oct 30 04:32:32.820: INFO: watch delete seen for pod-submit-status-1-13 Oct 30 04:32:32.820: INFO: Pod pod-submit-status-1-13 on node node1 timings total=12.754517777s t=1.317s run=0s execute=0s Oct 30 04:32:32.890: INFO: watch delete seen for pod-submit-status-2-12 Oct 30 04:32:32.890: INFO: Pod pod-submit-status-2-12 on node node2 timings total=9.999991048s t=910ms run=0s execute=0s Oct 30 04:32:35.702: INFO: watch delete seen for pod-submit-status-0-8 Oct 30 04:32:35.702: INFO: Pod pod-submit-status-0-8 on node node2 timings total=2.890638262s t=809ms run=0s execute=0s Oct 30 04:32:42.899: INFO: watch delete seen for pod-submit-status-2-13 Oct 30 04:32:42.899: INFO: Pod pod-submit-status-2-13 on node node2 timings total=10.008945996s t=1.407s run=0s execute=0s Oct 30 04:32:42.908: INFO: watch delete seen for pod-submit-status-1-14 Oct 30 04:32:42.908: INFO: Pod pod-submit-status-1-14 on node node2 timings total=10.087952806s t=1.663s run=0s execute=0s Oct 30 04:32:42.924: INFO: watch delete seen for pod-submit-status-0-9 Oct 30 04:32:42.924: INFO: Pod pod-submit-status-0-9 on node node2 timings total=7.222385451s t=338ms run=0s execute=0s Oct 30 04:32:52.897: INFO: watch delete seen for pod-submit-status-2-14 Oct 30 04:32:52.897: INFO: Pod pod-submit-status-2-14 on node node2 timings total=9.998500522s t=207ms run=0s execute=0s Oct 30 04:33:02.806: INFO: watch delete seen for pod-submit-status-0-10 Oct 30 04:33:02.806: INFO: Pod pod-submit-status-0-10 on node node1 timings total=19.881471298s t=1.86s run=3s execute=0s Oct 30 04:33:05.741: INFO: watch delete seen for pod-submit-status-0-11 Oct 30 04:33:05.741: INFO: Pod pod-submit-status-0-11 on node node1 timings total=2.93559251s t=813ms run=0s execute=0s Oct 30 04:33:22.895: INFO: watch delete seen for pod-submit-status-0-12 Oct 30 04:33:22.895: INFO: Pod pod-submit-status-0-12 on node node2 timings total=17.153968899s t=1.37s run=0s execute=0s Oct 30 04:33:32.896: INFO: watch delete seen for pod-submit-status-0-13 Oct 30 04:33:32.896: INFO: Pod pod-submit-status-0-13 on node node2 timings total=10.000365894s t=27ms run=0s execute=0s Oct 30 04:33:32.931: INFO: watch delete seen for pod-submit-status-0-14 Oct 30 04:33:32.931: INFO: Pod pod-submit-status-0-14 on node node1 timings total=35.075331ms t=5ms run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:33:32.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8348" for this suite. 
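
The Pods Extended spec above starts pods built to always exit 1, deletes each after a random delay, and records the deletions logged as "watch delete seen"; its invariant is that a container that never ran successfully must never surface a zero exit code. A sketch of that watch-side check; the namespace handling and label selector are illustrative, not the test's actual wiring:

    package e2esketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
    )

    // checkNoFalseSuccess watches pod deletions and flags any container
    // that claims a zero exit code for a pod that must always fail.
    func checkNoFalseSuccess(ctx context.Context, cs kubernetes.Interface, ns string) error {
        w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{LabelSelector: "test=pod-submit-status"})
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            pod, ok := ev.Object.(*corev1.Pod)
            if !ok || ev.Type != watch.Deleted {
                continue
            }
            for _, s := range pod.Status.ContainerStatuses {
                if t := s.State.Terminated; t != nil && t.ExitCode == 0 {
                    return fmt.Errorf("pod %s reported success for a container that must fail", pod.Name)
                }
            }
        }
        return nil
    }
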
• [SLOW TEST:179.725 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":7,"skipped":493,"failed":0} Oct 30 04:33:32.941: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:47.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-a47189ad-47b8-45da-9a33-bd1671182e42 in namespace container-probe-9387 Oct 30 04:29:51.142: INFO: Started pod startup-a47189ad-47b8-45da-9a33-bd1671182e42 in namespace container-probe-9387 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 04:29:51.145: INFO: Initial restart count of pod startup-a47189ad-47b8-45da-9a33-bd1671182e42 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:33:51.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9387" for this suite. 
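
The startup-probe spec above pairs a startup probe with an aggressive liveness probe; the kubelet withholds liveness and readiness probing until the startup probe has succeeded once, which is why the restart count stays at 0 for the full observation window. A sketch of a container wired that way; the image, commands, and thresholds are illustrative:

    package e2esketch

    import corev1 "k8s.io/api/core/v1"

    // guardedContainer returns a container whose liveness probe cannot
    // fire until the startup probe has succeeded once.
    func guardedContainer() corev1.Container {
        startup := &corev1.Probe{PeriodSeconds: 10, FailureThreshold: 30} // up to ~300s to start
        startup.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}}

        liveness := &corev1.Probe{PeriodSeconds: 10, FailureThreshold: 1}
        liveness.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}}

        return corev1.Container{
            Name:          "app",
            Image:         "busybox:1.28",
            Command:       []string{"/bin/sh", "-c", "touch /tmp/healthy /tmp/started && sleep 3600"},
            StartupProbe:  startup,
            LivenessProbe: liveness,
        }
    }
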
• [SLOW TEST:244.625 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":4,"skipped":636,"failed":0} Oct 30 04:33:51.727: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:52.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Oct 30 04:29:52.198: INFO: Waiting up to 5m0s for node node1 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Oct 30 04:29:53.209: INFO: node status heartbeat is unchanged for 1.003647871s, waiting for 1m20s Oct 30 04:29:54.210: INFO: node status heartbeat is unchanged for 2.00398225s, waiting for 1m20s Oct 30 04:29:55.211: INFO: node status heartbeat is unchanged for 3.004886677s, waiting for 1m20s Oct 30 04:29:56.210: INFO: node status heartbeat is unchanged for 4.004593151s, waiting for 1m20s Oct 30 04:29:57.209: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:29:57.214: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:29:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:29:56 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:29:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:29:56 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: 
"False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:29:46 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:29:56 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } Oct 30 04:29:58.210: INFO: node status heartbeat is unchanged for 1.001565063s, waiting for 1m20s Oct 30 04:29:59.211: INFO: node status heartbeat is unchanged for 2.001724033s, waiting for 1m20s Oct 30 04:30:00.210: INFO: node status heartbeat is unchanged for 3.001466662s, waiting for 1m20s Oct 30 04:30:01.209: INFO: node status heartbeat is unchanged for 3.999802701s, waiting for 1m20s Oct 30 04:30:02.210: INFO: node status heartbeat is unchanged for 5.00111892s, waiting for 1m20s Oct 30 04:30:03.210: INFO: node status heartbeat is unchanged for 6.00069936s, waiting for 1m20s Oct 30 04:30:04.209: INFO: node status heartbeat is unchanged for 7.000114677s, waiting for 1m20s Oct 30 04:30:05.210: INFO: node status heartbeat is unchanged for 8.001424246s, waiting for 1m20s Oct 30 04:30:06.210: INFO: node status heartbeat is unchanged for 9.000871627s, waiting for 1m20s Oct 30 04:30:07.211: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:30:07.216: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:29:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:06 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:29:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:06 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:29:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:06 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: 
"10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, NodeInfo: {MachineID: "3bf4179125e4495c89c046ed0ae7baf7", SystemUUID: "00CDA902-D022-E711-906E-0017A4403562", BootID: "ce868148-dc5e-4c7c-a555-42ee929547f7", KernelVersion: "3.10.0-1160.45.1.el7.x86_64", ...}, Images: []v1.ContainerImage{ ... // 33 identical elements {Names: {"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172"..., "appropriate/curl:edge"}, SizeBytes: 5654234}, {Names: {"alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c"..., "alpine:3.12"}, SizeBytes: 5581415}, + { + Names: []string{ + "gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c6"..., + "gcr.io/authenticated-image-pulling/alpine:3.7", + }, + SizeBytes: 4206620, + }, {Names: {"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad"..., "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}, SizeBytes: 1154361}, {Names: {"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea"..., "busybox:1.28"}, SizeBytes: 1146369}, ... // 2 identical elements }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } Oct 30 04:30:08.210: INFO: node status heartbeat is unchanged for 998.628491ms, waiting for 1m20s Oct 30 04:30:09.210: INFO: node status heartbeat is unchanged for 1.998669373s, waiting for 1m20s Oct 30 04:30:10.210: INFO: node status heartbeat is unchanged for 2.999169472s, waiting for 1m20s Oct 30 04:30:11.209: INFO: node status heartbeat is unchanged for 3.997358453s, waiting for 1m20s Oct 30 04:30:12.211: INFO: node status heartbeat is unchanged for 5.000001799s, waiting for 1m20s Oct 30 04:30:13.210: INFO: node status heartbeat is unchanged for 5.998684362s, waiting for 1m20s Oct 30 04:30:14.212: INFO: node status heartbeat is unchanged for 7.000988118s, waiting for 1m20s Oct 30 04:30:15.211: INFO: node status heartbeat is unchanged for 8.000165163s, waiting for 1m20s Oct 30 04:30:16.210: INFO: node status heartbeat is unchanged for 8.998615371s, waiting for 1m20s Oct 30 04:30:17.210: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Oct 30 04:30:17.215: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:17 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:17 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", 
Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:17 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } Oct 30 04:30:18.210: INFO: node status heartbeat is unchanged for 1.000266734s, waiting for 1m20s Oct 30 04:30:19.209: INFO: node status heartbeat is unchanged for 1.99934257s, waiting for 1m20s Oct 30 04:30:20.211: INFO: node status heartbeat is unchanged for 3.000506037s, waiting for 1m20s Oct 30 04:30:21.210: INFO: node status heartbeat is unchanged for 3.999615524s, waiting for 1m20s Oct 30 04:30:22.209: INFO: node status heartbeat is unchanged for 4.99935662s, waiting for 1m20s Oct 30 04:30:23.210: INFO: node status heartbeat is unchanged for 5.999759353s, waiting for 1m20s Oct 30 04:30:24.211: INFO: node status heartbeat is unchanged for 7.001298697s, waiting for 1m20s Oct 30 04:30:25.210: INFO: node status heartbeat is unchanged for 7.999735016s, waiting for 1m20s Oct 30 04:30:26.210: INFO: node status heartbeat is unchanged for 8.999723523s, waiting for 1m20s Oct 30 04:30:27.211: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:30:27.216: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: 
"KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } Oct 30 04:30:28.210: INFO: node status heartbeat is unchanged for 999.054351ms, waiting for 1m20s Oct 30 04:30:29.210: INFO: node status heartbeat is unchanged for 1.998720371s, waiting for 1m20s Oct 30 04:30:30.210: INFO: node status heartbeat is unchanged for 2.999058425s, waiting for 1m20s Oct 30 04:30:31.212: INFO: node status heartbeat is unchanged for 4.000753884s, waiting for 1m20s Oct 30 04:30:32.211: INFO: node status heartbeat is unchanged for 5.000263679s, waiting for 1m20s Oct 30 04:30:33.210: INFO: node status heartbeat is unchanged for 5.998762377s, waiting for 1m20s Oct 30 04:30:34.209: INFO: node status heartbeat is unchanged for 6.997729464s, waiting for 1m20s Oct 30 04:30:35.210: INFO: node status heartbeat is unchanged for 7.998582697s, waiting for 1m20s Oct 30 04:30:36.211: INFO: node status heartbeat is unchanged for 8.999899759s, waiting for 1m20s Oct 30 04:30:37.211: INFO: node status heartbeat is unchanged for 9.999910069s, waiting for 1m20s Oct 30 04:30:38.209: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:30:38.214: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:37 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:37 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:37 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Oct 30 04:30:39.211: INFO: node status heartbeat is unchanged for 1.001685198s, waiting for 1m20s Oct 30 04:30:40.210: INFO: node status heartbeat is unchanged for 2.000525172s, waiting for 1m20s Oct 30 04:30:41.211: INFO: node status heartbeat is unchanged for 3.002284815s, waiting for 1m20s Oct 30 04:30:42.213: INFO: node status heartbeat is unchanged for 4.0038302s, waiting for 1m20s Oct 30 04:30:43.210: INFO: node status heartbeat is unchanged for 5.001211217s, waiting for 1m20s Oct 30 04:30:44.209: INFO: node status heartbeat is unchanged for 6.00028016s, waiting for 1m20s Oct 30 04:30:45.211: INFO: node status heartbeat is unchanged for 7.001403255s, waiting for 1m20s Oct 30 04:30:46.209: INFO: node status heartbeat is unchanged for 8.00022936s, waiting for 1m20s Oct 30 04:30:47.213: INFO: node status heartbeat is unchanged for 9.00361462s, waiting for 1m20s Oct 30 04:30:48.209: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:30:48.213: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:47 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:47 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:47 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Oct 30 04:30:49.213: INFO: node status heartbeat is unchanged for 1.004056608s, waiting for 1m20s Oct 30 04:30:50.211: INFO: node status heartbeat is unchanged for 2.002472896s, waiting for 1m20s Oct 30 04:30:51.209: INFO: node status heartbeat is unchanged for 3.000522899s, waiting for 1m20s Oct 30 04:30:52.211: INFO: node status heartbeat is unchanged for 4.001947005s, waiting for 1m20s Oct 30 04:30:53.210: INFO: node status heartbeat is unchanged for 5.001705959s, waiting for 1m20s Oct 30 04:30:54.211: INFO: node status heartbeat is unchanged for 6.001954634s, waiting for 1m20s Oct 30 04:30:55.211: INFO: node status heartbeat is unchanged for 7.00199398s, waiting for 1m20s Oct 30 04:30:56.212: INFO: node status heartbeat is unchanged for 8.003198874s, waiting for 1m20s Oct 30 04:30:57.211: INFO: node status heartbeat is unchanged for 9.00242226s, waiting for 1m20s Oct 30 04:30:58.210: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:30:58.214: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:57 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:57 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:57 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } Oct 30 04:30:59.212: INFO: node status heartbeat is unchanged for 1.00205519s, waiting for 1m20s Oct 30 04:31:00.210: INFO: node status heartbeat is unchanged for 2.000537454s, waiting for 1m20s Oct 30 04:31:01.210: INFO: node status heartbeat is unchanged for 2.99987264s, waiting for 1m20s Oct 30 04:31:02.211: INFO: node status heartbeat is unchanged for 4.001097562s, waiting for 1m20s Oct 30 04:31:03.209: INFO: node status heartbeat is unchanged for 4.999352499s, waiting for 1m20s Oct 30 04:31:04.210: INFO: node status heartbeat is unchanged for 6.000393998s, waiting for 1m20s Oct 30 04:31:05.210: INFO: node status heartbeat is unchanged for 6.999885066s, waiting for 1m20s Oct 30 04:31:06.211: INFO: node status heartbeat is unchanged for 8.001290348s, waiting for 1m20s Oct 30 04:31:07.213: INFO: node status heartbeat is unchanged for 9.003391221s, waiting for 1m20s Oct 30 04:31:08.210: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 30 04:31:08.214: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:07 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:07 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:30:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:07 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
Oct 30 04:31:09.210: INFO: node status heartbeat is unchanged for 1.000057945s, waiting for 1m20s
Oct 30 04:31:10.210: INFO: node status heartbeat is unchanged for 2.000535117s, waiting for 1m20s
Oct 30 04:31:11.213: INFO: node status heartbeat is unchanged for 3.003545259s, waiting for 1m20s
Oct 30 04:31:12.212: INFO: node status heartbeat is unchanged for 4.00215861s, waiting for 1m20s
Oct 30 04:31:13.210: INFO: node status heartbeat is unchanged for 4.999868559s, waiting for 1m20s
Oct 30 04:31:14.210: INFO: node status heartbeat is unchanged for 6.000031183s, waiting for 1m20s
Oct 30 04:31:15.209: INFO: node status heartbeat is unchanged for 6.999674188s, waiting for 1m20s
Oct 30 04:31:16.212: INFO: node status heartbeat is unchanged for 8.00260749s, waiting for 1m20s
Oct 30 04:31:17.209: INFO: node status heartbeat is unchanged for 8.999167199s, waiting for 1m20s
Oct 30 04:31:18.210: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:31:18.215: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:07 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:17 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:07 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:17 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:07 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:17 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:31:19.210: INFO: node status heartbeat is unchanged for 1.000669274s, waiting for 1m20s
Oct 30 04:31:20.210: INFO: node status heartbeat is unchanged for 2.000654429s, waiting for 1m20s
Oct 30 04:31:21.210: INFO: node status heartbeat is unchanged for 3.000283082s, waiting for 1m20s
Oct 30 04:31:22.209: INFO: node status heartbeat is unchanged for 3.999418704s, waiting for 1m20s
Oct 30 04:31:23.211: INFO: node status heartbeat is unchanged for 5.00107818s, waiting for 1m20s
Oct 30 04:31:24.211: INFO: node status heartbeat is unchanged for 6.000860599s, waiting for 1m20s
Oct 30 04:31:25.209: INFO: node status heartbeat is unchanged for 6.999376493s, waiting for 1m20s
Oct 30 04:31:26.209: INFO: node status heartbeat is unchanged for 7.999323489s, waiting for 1m20s
Oct 30 04:31:27.210: INFO: node status heartbeat is unchanged for 9.000452449s, waiting for 1m20s
Oct 30 04:31:28.210: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:31:28.215: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:17 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:27 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:17 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:27 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:17 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:27 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:31:29.210: INFO: node status heartbeat is unchanged for 1.000153032s, waiting for 1m20s
Oct 30 04:31:30.210: INFO: node status heartbeat is unchanged for 1.999616699s, waiting for 1m20s
Oct 30 04:31:31.210: INFO: node status heartbeat is unchanged for 3.000047833s, waiting for 1m20s
Oct 30 04:31:32.210: INFO: node status heartbeat is unchanged for 3.999353727s, waiting for 1m20s
Oct 30 04:31:33.209: INFO: node status heartbeat is unchanged for 4.998899377s, waiting for 1m20s
Oct 30 04:31:34.211: INFO: node status heartbeat is unchanged for 6.001071461s, waiting for 1m20s
Oct 30 04:31:35.210: INFO: node status heartbeat is unchanged for 6.999561861s, waiting for 1m20s
Oct 30 04:31:36.211: INFO: node status heartbeat is unchanged for 8.000561212s, waiting for 1m20s
Oct 30 04:31:37.210: INFO: node status heartbeat is unchanged for 8.999948902s, waiting for 1m20s
Oct 30 04:31:38.209: INFO: node status heartbeat is unchanged for 9.999111761s, waiting for 1m20s
Oct 30 04:31:39.210: INFO: node status heartbeat is unchanged for 10.999548188s, waiting for 1m20s
Oct 30 04:31:40.210: INFO: node status heartbeat changed in 12s (with other status changes), waiting for 40s
Oct 30 04:31:40.214: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:27 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:39 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:27 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:39 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:27 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:39 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
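The 04:31:40 cycle above is the first irregular one: twelve seconds between reported heartbeats instead of ten. Nothing is wrong; the ~1s poller and the kubelet's ~10s status reports run on independent clocks, so the observed gap jitters around the nominal period. The check the poller applies reduces to "remember when the heartbeat last moved, fail if it stays frozen past the bound" (1m20s in this log). Below is a hedged reconstruction of that loop, assuming a getNode helper; it is illustrative, not the suite's actual implementation:

package heartbeat

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// latestHeartbeat returns the newest LastHeartbeatTime across all conditions.
func latestHeartbeat(node *v1.Node) metav1.Time {
	var latest metav1.Time
	for i := range node.Status.Conditions {
		hb := node.Status.Conditions[i].LastHeartbeatTime
		if latest.Before(&hb) {
			latest = hb
		}
	}
	return latest
}

// watchHeartbeat polls getNode once per second, logs each change, and errors
// out if the heartbeat stays unchanged for longer than maxQuiet (1m20s above).
// It runs until the bound is violated or getNode fails.
func watchHeartbeat(getNode func() (*v1.Node, error), maxQuiet time.Duration) error {
	node, err := getNode()
	if err != nil {
		return err
	}
	last := latestHeartbeat(node)
	lastChange := time.Now()
	for {
		time.Sleep(time.Second)
		if node, err = getNode(); err != nil {
			return err
		}
		if hb := latestHeartbeat(node); !hb.Equal(&last) {
			fmt.Printf("node status heartbeat changed in %v\n", time.Since(lastChange).Round(time.Second))
			last, lastChange = hb, time.Now()
			continue
		}
		if quiet := time.Since(lastChange); quiet > maxQuiet {
			return fmt.Errorf("node status heartbeat unchanged for %v", quiet)
		}
	}
}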
Oct 30 04:31:41.210: INFO: node status heartbeat is unchanged for 1.00012979s, waiting for 1m20s
Oct 30 04:31:42.211: INFO: node status heartbeat is unchanged for 2.001255285s, waiting for 1m20s
Oct 30 04:31:43.210: INFO: node status heartbeat is unchanged for 3.000405152s, waiting for 1m20s
Oct 30 04:31:44.210: INFO: node status heartbeat is unchanged for 4.00055213s, waiting for 1m20s
Oct 30 04:31:45.210: INFO: node status heartbeat is unchanged for 5.000393352s, waiting for 1m20s
Oct 30 04:31:46.211: INFO: node status heartbeat is unchanged for 6.001549056s, waiting for 1m20s
Oct 30 04:31:47.225: INFO: node status heartbeat is unchanged for 7.015435618s, waiting for 1m20s
Oct 30 04:31:48.209: INFO: node status heartbeat is unchanged for 7.999387264s, waiting for 1m20s
Oct 30 04:31:49.210: INFO: node status heartbeat is unchanged for 9.000736198s, waiting for 1m20s
Oct 30 04:31:50.211: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:31:50.216: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:39 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:49 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:39 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:49 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:39 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:49 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:31:51.211: INFO: node status heartbeat is unchanged for 999.659388ms, waiting for 1m20s
Oct 30 04:31:52.211: INFO: node status heartbeat is unchanged for 1.99962259s, waiting for 1m20s
Oct 30 04:31:53.210: INFO: node status heartbeat is unchanged for 2.998986248s, waiting for 1m20s
Oct 30 04:31:54.211: INFO: node status heartbeat is unchanged for 4.000287278s, waiting for 1m20s
Oct 30 04:31:55.212: INFO: node status heartbeat is unchanged for 5.000378554s, waiting for 1m20s
Oct 30 04:31:56.212: INFO: node status heartbeat is unchanged for 6.000563723s, waiting for 1m20s
Oct 30 04:31:57.210: INFO: node status heartbeat is unchanged for 6.998502807s, waiting for 1m20s
Oct 30 04:31:58.211: INFO: node status heartbeat is unchanged for 7.999796434s, waiting for 1m20s
Oct 30 04:31:59.213: INFO: node status heartbeat is unchanged for 9.001635726s, waiting for 1m20s
Oct 30 04:32:00.212: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:32:00.217: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:49 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:59 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:49 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:59 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:49 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:59 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:32:01.209: INFO: node status heartbeat is unchanged for 996.872134ms, waiting for 1m20s
Oct 30 04:32:02.211: INFO: node status heartbeat is unchanged for 1.998281219s, waiting for 1m20s
Oct 30 04:32:03.209: INFO: node status heartbeat is unchanged for 2.996998599s, waiting for 1m20s
Oct 30 04:32:04.212: INFO: node status heartbeat is unchanged for 3.999618705s, waiting for 1m20s
Oct 30 04:32:05.210: INFO: node status heartbeat is unchanged for 4.998022996s, waiting for 1m20s
Oct 30 04:32:06.212: INFO: node status heartbeat is unchanged for 6.000007881s, waiting for 1m20s
Oct 30 04:32:07.210: INFO: node status heartbeat is unchanged for 6.997917842s, waiting for 1m20s
Oct 30 04:32:08.210: INFO: node status heartbeat is unchanged for 7.997349894s, waiting for 1m20s
Oct 30 04:32:09.211: INFO: node status heartbeat is unchanged for 8.998765171s, waiting for 1m20s
Oct 30 04:32:10.211: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:32:10.216: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:59 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:09 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:59 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:09 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:31:59 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:09 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
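For readers unfamiliar with the dump notation: it is the report style of go-cmp (github.com/google/go-cmp), where "-" lines carry the previously observed value, "+" lines the new one, "..." elides unchanged entries, and the trailing "// 5 identical fields" stands for the remaining v1.NodeStatus fields (NodeInfo, Images, VolumesInUse, VolumesAttached, Config) that did not change between reads. A toy example with a plain struct produces the same shape of output; the condition type here is a simplified stand-in, not the real v1.NodeCondition:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

// condition is a simplified stand-in for v1.NodeCondition, using exported
// string fields so cmp.Diff can compare it without extra options.
type condition struct {
	Type, Status, Reason, LastHeartbeatTime string
}

func main() {
	before := condition{"MemoryPressure", "False", "KubeletHasSufficientMemory", "04:32:09"}
	after := condition{"MemoryPressure", "False", "KubeletHasSufficientMemory", "04:32:19"}
	// Prints a report in the same "-old +new" style as the log above.
	fmt.Print(cmp.Diff(before, after))
}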
Oct 30 04:32:11.213: INFO: node status heartbeat is unchanged for 1.001565175s, waiting for 1m20s
Oct 30 04:32:12.211: INFO: node status heartbeat is unchanged for 2.000307233s, waiting for 1m20s
Oct 30 04:32:13.210: INFO: node status heartbeat is unchanged for 2.998709971s, waiting for 1m20s
Oct 30 04:32:14.213: INFO: node status heartbeat is unchanged for 4.00220853s, waiting for 1m20s
Oct 30 04:32:15.211: INFO: node status heartbeat is unchanged for 5.000266694s, waiting for 1m20s
Oct 30 04:32:16.210: INFO: node status heartbeat is unchanged for 5.998690123s, waiting for 1m20s
Oct 30 04:32:17.210: INFO: node status heartbeat is unchanged for 6.999115828s, waiting for 1m20s
Oct 30 04:32:18.210: INFO: node status heartbeat is unchanged for 7.999157713s, waiting for 1m20s
Oct 30 04:32:19.212: INFO: node status heartbeat is unchanged for 9.000698013s, waiting for 1m20s
Oct 30 04:32:20.210: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:32:20.214: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:09 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:19 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:09 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:19 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:09 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:19 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:32:21.210: INFO: node status heartbeat is unchanged for 1.000392181s, waiting for 1m20s
Oct 30 04:32:22.210: INFO: node status heartbeat is unchanged for 2.000919767s, waiting for 1m20s
Oct 30 04:32:23.210: INFO: node status heartbeat is unchanged for 3.000775s, waiting for 1m20s
Oct 30 04:32:24.210: INFO: node status heartbeat is unchanged for 4.000352072s, waiting for 1m20s
Oct 30 04:32:25.210: INFO: node status heartbeat is unchanged for 5.000951418s, waiting for 1m20s
Oct 30 04:32:26.212: INFO: node status heartbeat is unchanged for 6.002601934s, waiting for 1m20s
Oct 30 04:32:27.210: INFO: node status heartbeat is unchanged for 7.000632388s, waiting for 1m20s
Oct 30 04:32:28.210: INFO: node status heartbeat is unchanged for 8.000175648s, waiting for 1m20s
Oct 30 04:32:29.210: INFO: node status heartbeat is unchanged for 9.000516348s, waiting for 1m20s
Oct 30 04:32:30.211: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:32:30.216: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:19 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:29 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:19 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:29 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:19 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:29 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:32:31.212: INFO: node status heartbeat is unchanged for 1.000548224s, waiting for 1m20s
Oct 30 04:32:32.212: INFO: node status heartbeat is unchanged for 2.000790534s, waiting for 1m20s
Oct 30 04:32:33.210: INFO: node status heartbeat is unchanged for 2.998552168s, waiting for 1m20s
Oct 30 04:32:34.212: INFO: node status heartbeat is unchanged for 4.000457665s, waiting for 1m20s
Oct 30 04:32:35.211: INFO: node status heartbeat is unchanged for 4.999553678s, waiting for 1m20s
Oct 30 04:32:36.212: INFO: node status heartbeat is unchanged for 6.000521721s, waiting for 1m20s
Oct 30 04:32:37.210: INFO: node status heartbeat is unchanged for 6.998361985s, waiting for 1m20s
Oct 30 04:32:38.211: INFO: node status heartbeat is unchanged for 7.999601954s, waiting for 1m20s
Oct 30 04:32:39.210: INFO: node status heartbeat is unchanged for 8.998420092s, waiting for 1m20s
Oct 30 04:32:40.210: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:32:40.215: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:29 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:39 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:29 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:39 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:29 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:39 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:32:41.210: INFO: node status heartbeat is unchanged for 999.291302ms, waiting for 1m20s
Oct 30 04:32:42.209: INFO: node status heartbeat is unchanged for 1.998424557s, waiting for 1m20s
Oct 30 04:32:43.210: INFO: node status heartbeat is unchanged for 2.999956475s, waiting for 1m20s
Oct 30 04:32:44.210: INFO: node status heartbeat is unchanged for 3.999499735s, waiting for 1m20s
Oct 30 04:32:45.209: INFO: node status heartbeat is unchanged for 4.998897039s, waiting for 1m20s
Oct 30 04:32:46.211: INFO: node status heartbeat is unchanged for 6.000663719s, waiting for 1m20s
Oct 30 04:32:47.210: INFO: node status heartbeat is unchanged for 7.000262374s, waiting for 1m20s
Oct 30 04:32:48.210: INFO: node status heartbeat is unchanged for 7.999400091s, waiting for 1m20s
Oct 30 04:32:49.211: INFO: node status heartbeat is unchanged for 9.000451946s, waiting for 1m20s
Oct 30 04:32:50.210: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s
Oct 30 04:32:50.215: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:39 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:50 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:39 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:50 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:39 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:50 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
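The 04:32:50 cycle reports "changed in 11s", and the new heartbeat lands on :50 rather than :49: API timestamps of type v1.Time are serialized at whole-second (RFC3339) precision, so sub-second offsets between the kubelet's writes and the poller's reads surface as occasional 11s or 12s deltas around the nominal 10s period. A small sketch of the truncation (the timestamp is invented for illustration):

package main

import (
	"encoding/json"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// 123456789ns of sub-second detail...
	t := metav1.NewTime(time.Date(2021, 10, 30, 4, 32, 50, 123456789, time.UTC))
	b, _ := json.Marshal(t)
	// ...is dropped on the wire: prints "2021-10-30T04:32:50Z".
	fmt.Println(string(b))
}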
Oct 30 04:32:51.210: INFO: node status heartbeat is unchanged for 999.835002ms, waiting for 1m20s
Oct 30 04:32:52.212: INFO: node status heartbeat is unchanged for 2.001566298s, waiting for 1m20s
Oct 30 04:32:53.211: INFO: node status heartbeat is unchanged for 3.00099415s, waiting for 1m20s
Oct 30 04:32:54.212: INFO: node status heartbeat is unchanged for 4.00181663s, waiting for 1m20s
Oct 30 04:32:55.212: INFO: node status heartbeat is unchanged for 5.002152717s, waiting for 1m20s
Oct 30 04:32:56.213: INFO: node status heartbeat is unchanged for 6.002631574s, waiting for 1m20s
Oct 30 04:32:57.212: INFO: node status heartbeat is unchanged for 7.001534183s, waiting for 1m20s
Oct 30 04:32:58.210: INFO: node status heartbeat is unchanged for 7.99984949s, waiting for 1m20s
Oct 30 04:32:59.212: INFO: node status heartbeat is unchanged for 9.002383573s, waiting for 1m20s
Oct 30 04:33:00.212: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:33:00.216: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:50 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:00 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:50 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:00 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:32:50 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:00 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:33:01.214: INFO: node status heartbeat is unchanged for 1.002012386s, waiting for 1m20s
Oct 30 04:33:02.212: INFO: node status heartbeat is unchanged for 2.000502179s, waiting for 1m20s
Oct 30 04:33:03.211: INFO: node status heartbeat is unchanged for 2.999191115s, waiting for 1m20s
Oct 30 04:33:04.209: INFO: node status heartbeat is unchanged for 3.997747528s, waiting for 1m20s
Oct 30 04:33:05.210: INFO: node status heartbeat is unchanged for 4.998513833s, waiting for 1m20s
Oct 30 04:33:06.213: INFO: node status heartbeat is unchanged for 6.001510274s, waiting for 1m20s
Oct 30 04:33:07.209: INFO: node status heartbeat is unchanged for 6.997710346s, waiting for 1m20s
Oct 30 04:33:08.209: INFO: node status heartbeat is unchanged for 7.997931998s, waiting for 1m20s
Oct 30 04:33:09.210: INFO: node status heartbeat is unchanged for 8.998372233s, waiting for 1m20s
Oct 30 04:33:10.211: INFO: node status heartbeat is unchanged for 9.999512031s, waiting for 1m20s
Oct 30 04:33:11.210: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:33:11.215: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:00 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:10 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:00 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:10 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:00 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:10 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:33:12.210: INFO: node status heartbeat is unchanged for 999.770885ms, waiting for 1m20s
Oct 30 04:33:13.210: INFO: node status heartbeat is unchanged for 1.999443726s, waiting for 1m20s
Oct 30 04:33:14.211: INFO: node status heartbeat is unchanged for 3.000317302s, waiting for 1m20s
Oct 30 04:33:15.210: INFO: node status heartbeat is unchanged for 4.000224239s, waiting for 1m20s
Oct 30 04:33:16.212: INFO: node status heartbeat is unchanged for 5.001979369s, waiting for 1m20s
Oct 30 04:33:17.212: INFO: node status heartbeat is unchanged for 6.001670552s, waiting for 1m20s
Oct 30 04:33:18.210: INFO: node status heartbeat is unchanged for 6.999406857s, waiting for 1m20s
Oct 30 04:33:19.212: INFO: node status heartbeat is unchanged for 8.002136437s, waiting for 1m20s
Oct 30 04:33:20.211: INFO: node status heartbeat is unchanged for 9.000874356s, waiting for 1m20s
Oct 30 04:33:21.211: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:33:21.216: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:10 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:20 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:10 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:20 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:10 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:20 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:33:22.211: INFO: node status heartbeat is unchanged for 999.650845ms, waiting for 1m20s
Oct 30 04:33:23.210: INFO: node status heartbeat is unchanged for 1.999418929s, waiting for 1m20s
Oct 30 04:33:24.210: INFO: node status heartbeat is unchanged for 2.999004366s, waiting for 1m20s
Oct 30 04:33:25.210: INFO: node status heartbeat is unchanged for 3.998717151s, waiting for 1m20s
Oct 30 04:33:26.210: INFO: node status heartbeat is unchanged for 4.999525577s, waiting for 1m20s
Oct 30 04:33:27.210: INFO: node status heartbeat is unchanged for 5.998592585s, waiting for 1m20s
Oct 30 04:33:28.209: INFO: node status heartbeat is unchanged for 6.998290118s, waiting for 1m20s
Oct 30 04:33:29.210: INFO: node status heartbeat is unchanged for 7.998603736s, waiting for 1m20s
Oct 30 04:33:30.211: INFO: node status heartbeat is unchanged for 9.000301501s, waiting for 1m20s
Oct 30 04:33:31.210: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:33:31.215: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:20 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:30 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:20 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:30 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:20 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:30 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:33:32.211: INFO: node status heartbeat is unchanged for 1.001461006s, waiting for 1m20s
Oct 30 04:33:33.210: INFO: node status heartbeat is unchanged for 1.999909955s, waiting for 1m20s
Oct 30 04:33:34.210: INFO: node status heartbeat is unchanged for 2.99985208s, waiting for 1m20s
Oct 30 04:33:35.210: INFO: node status heartbeat is unchanged for 3.999734658s, waiting for 1m20s
Oct 30 04:33:36.212: INFO: node status heartbeat is unchanged for 5.002330244s, waiting for 1m20s
Oct 30 04:33:37.211: INFO: node status heartbeat is unchanged for 6.00090374s, waiting for 1m20s
Oct 30 04:33:38.209: INFO: node status heartbeat is unchanged for 6.999300378s, waiting for 1m20s
Oct 30 04:33:39.212: INFO: node status heartbeat is unchanged for 8.001986069s, waiting for 1m20s
Oct 30 04:33:40.210: INFO: node status heartbeat is unchanged for 9.000226745s, waiting for 1m20s
Oct 30 04:33:41.214: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:33:41.219: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:30 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:40 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:30 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:40 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:30 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:40 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:33:42.213: INFO: node status heartbeat is unchanged for 999.123155ms, waiting for 1m20s
Oct 30 04:33:43.210: INFO: node status heartbeat is unchanged for 1.995769636s, waiting for 1m20s
Oct 30 04:33:44.212: INFO: node status heartbeat is unchanged for 2.99801311s, waiting for 1m20s
Oct 30 04:33:45.211: INFO: node status heartbeat is unchanged for 3.997324892s, waiting for 1m20s
Oct 30 04:33:46.211: INFO: node status heartbeat is unchanged for 4.997077861s, waiting for 1m20s
Oct 30 04:33:47.213: INFO: node status heartbeat is unchanged for 5.998632575s, waiting for 1m20s
Oct 30 04:33:48.210: INFO: node status heartbeat is unchanged for 6.995959059s, waiting for 1m20s
Oct 30 04:33:49.212: INFO: node status heartbeat is unchanged for 7.997516878s, waiting for 1m20s
Oct 30 04:33:50.210: INFO: node status heartbeat is unchanged for 8.996151458s, waiting for 1m20s
Oct 30 04:33:51.211: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:33:51.216: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:40 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:50 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:40 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:50 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:40 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:50 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:33:52.209: INFO: node status heartbeat is unchanged for 997.520395ms, waiting for 1m20s
Oct 30 04:33:53.210: INFO: node status heartbeat is unchanged for 1.998550064s, waiting for 1m20s
Oct 30 04:33:54.212: INFO: node status heartbeat is unchanged for 3.001150502s, waiting for 1m20s
Oct 30 04:33:55.212: INFO: node status heartbeat is unchanged for 4.000287061s, waiting for 1m20s
Oct 30 04:33:56.213: INFO: node status heartbeat is unchanged for 5.001412041s, waiting for 1m20s
Oct 30 04:33:57.213: INFO: node status heartbeat is unchanged for 6.001796299s, waiting for 1m20s
Oct 30 04:33:58.210: INFO: node status heartbeat is unchanged for 6.99823498s, waiting for 1m20s
Oct 30 04:33:59.213: INFO: node status heartbeat is unchanged for 8.00151983s, waiting for 1m20s
Oct 30 04:34:00.211: INFO: node status heartbeat is unchanged for 9.000104862s, waiting for 1m20s
Oct 30 04:34:01.211: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:34:01.215: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:50 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:00 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:50 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:00 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:33:50 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:00 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:34:02.212: INFO: node status heartbeat is unchanged for 1.001368794s, waiting for 1m20s
Oct 30 04:34:03.211: INFO: node status heartbeat is unchanged for 2.000661382s, waiting for 1m20s
Oct 30 04:34:04.211: INFO: node status heartbeat is unchanged for 3.000372478s, waiting for 1m20s
Oct 30 04:34:05.212: INFO: node status heartbeat is unchanged for 4.001022599s, waiting for 1m20s
Oct 30 04:34:06.211: INFO: node status heartbeat is unchanged for 5.000143323s, waiting for 1m20s
Oct 30 04:34:07.210: INFO: node status heartbeat is unchanged for 5.99954341s, waiting for 1m20s
Oct 30 04:34:08.210: INFO: node status heartbeat is unchanged for 6.999654561s, waiting for 1m20s
Oct 30 04:34:09.210: INFO: node status heartbeat is unchanged for 7.99986061s, waiting for 1m20s
Oct 30 04:34:10.211: INFO: node status heartbeat is unchanged for 9.000252073s, waiting for 1m20s
Oct 30 04:34:11.210: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:34:11.215: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:00 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:10 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:00 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:10 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:00 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:10 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:34:12.210: INFO: node status heartbeat is unchanged for 999.717236ms, waiting for 1m20s
Oct 30 04:34:13.209: INFO: node status heartbeat is unchanged for 1.999022984s, waiting for 1m20s
Oct 30 04:34:14.210: INFO: node status heartbeat is unchanged for 3.000030645s, waiting for 1m20s
Oct 30 04:34:15.212: INFO: node status heartbeat is unchanged for 4.00180746s, waiting for 1m20s
Oct 30 04:34:16.210: INFO: node status heartbeat is unchanged for 4.999402009s, waiting for 1m20s
Oct 30 04:34:17.210: INFO: node status heartbeat is unchanged for 5.999921072s, waiting for 1m20s
Oct 30 04:34:18.210: INFO: node status heartbeat is unchanged for 7.000037455s, waiting for 1m20s
Oct 30 04:34:19.209: INFO: node status heartbeat is unchanged for 7.998739457s, waiting for 1m20s
Oct 30 04:34:20.209: INFO: node status heartbeat is unchanged for 8.998887129s, waiting for 1m20s
Oct 30 04:34:21.210: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:34:21.215: INFO: v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:10 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:20 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:10 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:20 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:10 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2021-10-30 04:34:20 +0000 UTC"},
      LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ...
  // 5 identical fields
}
Oct 30 04:34:22.211: INFO: node status heartbeat is unchanged for 1.000475557s, waiting for 1m20s
Oct 30 04:34:23.211: INFO: node status heartbeat is unchanged for 2.000181837s, waiting for 1m20s
Oct 30 04:34:24.210: INFO: node status heartbeat is unchanged for 2.999445362s, waiting for 1m20s
Oct 30 04:34:25.210: INFO: node status heartbeat is unchanged for 3.999767549s, waiting for 1m20s
Oct 30 04:34:26.211: INFO: node status heartbeat is unchanged for 5.000402941s, waiting for 1m20s
Oct 30 04:34:27.210: INFO: node status heartbeat is unchanged for 5.999873176s, waiting for 1m20s
Oct 30 04:34:28.210: INFO: node status heartbeat is unchanged for 6.999546844s, waiting for 1m20s
Oct 30 04:34:29.211: INFO: node status heartbeat is unchanged for 8.00040722s, waiting for 1m20s
Oct 30 04:34:30.210: INFO: node status heartbeat is unchanged for 8.999247381s, waiting for 1m20s
Oct 30 04:34:31.211: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:34:31.215: INFO: v1.NodeStatus{
    Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase:       "",
    Conditions: []v1.NodeCondition{
        {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
        {
            Type:               "MemoryPressure",
            Status:             "False",
-           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:20 +0000 UTC"},
+           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:30 +0000 UTC"},
            LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
            Reason:             "KubeletHasSufficientMemory",
            Message:            "kubelet has sufficient memory available",
        },
        {
            Type:               "DiskPressure",
            Status:             "False",
-           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:20 +0000 UTC"},
+           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:30 +0000 UTC"},
            LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
            Reason:             "KubeletHasNoDiskPressure",
            Message:            "kubelet has no disk pressure",
        },
        {
            Type:               "PIDPressure",
            Status:             "False",
-           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:20 +0000 UTC"},
+           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:30 +0000 UTC"},
            LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
            Reason:             "KubeletHasSufficientPID",
            Message:            "kubelet has sufficient PID available",
        },
        {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses:       {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
}
Oct 30 04:34:32.210: INFO: node status heartbeat is unchanged for 999.470792ms, waiting for 1m20s
Oct 30 04:34:33.210: INFO: node status heartbeat is unchanged for 1.999362705s, waiting for 1m20s
Oct 30 04:34:34.213: INFO: node status heartbeat is unchanged for 3.002177954s, waiting for 1m20s
Oct 30 04:34:35.211: INFO: node status heartbeat is unchanged for 4.00040287s, waiting for 1m20s
Oct 30 04:34:36.211: INFO: node status heartbeat is unchanged for 5.000765351s, waiting for 1m20s
Oct 30 04:34:37.211: INFO: node status heartbeat is unchanged for 6.000100616s, waiting for 1m20s
Oct 30 04:34:38.209: INFO: node status heartbeat is unchanged for 6.998804931s, waiting for 1m20s
Oct 30 04:34:39.212: INFO: node status heartbeat is unchanged for 8.001822841s, waiting for 1m20s
Oct 30 04:34:40.211: INFO: node status heartbeat is unchanged for 9.000248548s, waiting for 1m20s
Oct 30 04:34:41.213: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:34:41.217: INFO: v1.NodeStatus{
    Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase:       "",
    Conditions: []v1.NodeCondition{
        {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
        {
            Type:               "MemoryPressure",
            Status:             "False",
-           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:30 +0000 UTC"},
+           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:40 +0000 UTC"},
            LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
            Reason:             "KubeletHasSufficientMemory",
            Message:            "kubelet has sufficient memory available",
        },
        {
            Type:               "DiskPressure",
            Status:             "False",
-           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:30 +0000 UTC"},
+           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:40 +0000 UTC"},
            LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
            Reason:             "KubeletHasNoDiskPressure",
            Message:            "kubelet has no disk pressure",
        },
        {
            Type:               "PIDPressure",
            Status:             "False",
-           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:30 +0000 UTC"},
+           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:40 +0000 UTC"},
            LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
            Reason:             "KubeletHasSufficientPID",
            Message:            "kubelet has sufficient PID available",
        },
        {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses:       {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
}
Oct 30 04:34:42.212: INFO: node status heartbeat is unchanged for 999.452598ms, waiting for 1m20s
Oct 30 04:34:43.211: INFO: node status heartbeat is unchanged for 1.998096932s, waiting for 1m20s
Oct 30 04:34:44.213: INFO: node status heartbeat is unchanged for 3.000056566s, waiting for 1m20s
Oct 30 04:34:45.211: INFO: node status heartbeat is unchanged for 3.998423274s, waiting for 1m20s
Oct 30 04:34:46.210: INFO: node status heartbeat is unchanged for 4.997668316s, waiting for 1m20s
Oct 30 04:34:47.209: INFO: node status heartbeat is unchanged for 5.996348561s, waiting for 1m20s
Oct 30 04:34:48.210: INFO: node status heartbeat is unchanged for 6.997342283s, waiting for 1m20s
Oct 30 04:34:49.210: INFO: node status heartbeat is unchanged for 7.997762392s, waiting for 1m20s
Oct 30 04:34:50.210: INFO: node status heartbeat is unchanged for 8.997363224s, waiting for 1m20s
Oct 30 04:34:51.212: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 30 04:34:51.217: INFO: v1.NodeStatus{
    Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase:       "",
    Conditions: []v1.NodeCondition{
        {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-29 21:11:38 +0000 UTC"}, ...},
        {
            Type:               "MemoryPressure",
            Status:             "False",
-           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:40 +0000 UTC"},
+           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:50 +0000 UTC"},
            LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
            Reason:             "KubeletHasSufficientMemory",
            Message:            "kubelet has sufficient memory available",
        },
        {
            Type:               "DiskPressure",
            Status:             "False",
-           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:40 +0000 UTC"},
+           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:50 +0000 UTC"},
            LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
            Reason:             "KubeletHasNoDiskPressure",
            Message:            "kubelet has no disk pressure",
        },
        {
            Type:               "PIDPressure",
            Status:             "False",
-           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:40 +0000 UTC"},
+           LastHeartbeatTime:  v1.Time{Time: s"2021-10-30 04:34:50 +0000 UTC"},
            LastTransitionTime: {Time: s"2021-10-29 21:07:27 +0000 UTC"},
            Reason:             "KubeletHasSufficientPID",
            Message:            "kubelet has sufficient PID available",
        },
        {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-29 21:08:36 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses:       {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
}
Oct 30 04:34:52.209: INFO: node status heartbeat is unchanged for 996.694542ms, waiting for 1m20s
Oct 30 04:34:52.211: INFO: node status heartbeat is unchanged for 999.610782ms, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:34:52.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-8413" for this suite.

• [SLOW TEST:300.048 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":3,"skipped":287,"failed":0}
Oct 30 04:34:52.228: INFO: Running AfterSuite actions on all nodes
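(What this NodeLease spec demonstrates: with the feature enabled, the kubelet's cheap heartbeat is a Lease object in the kube-node-lease namespace, renewed roughly every 10s, while full NodeStatus writes become infrequent; the node must stay Ready in between, which is what the final step asserts. A minimal client-go sketch for observing both signals, assuming node "node1" and the standard kubeconfig path -- illustrative, not the spec's own code:)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The cheap heartbeat: one Lease per node in kube-node-lease, named after
	// the node, renewed by the kubelet roughly every 10s.
	lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "node1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("lease renewed %s (duration %ds)\n",
		lease.Spec.RenewTime.Time, *lease.Spec.LeaseDurationSeconds)

	// The expensive heartbeat: full NodeStatus writes, now infrequent. The
	// spec's last step checks that Ready stays true between them.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "node1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == "Ready" {
			fmt.Printf("Ready=%s, last heartbeat %s\n", c.Status, c.LastHeartbeatTime.Time)
		}
	}
}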
"pod-back-off-image" STEP: get restart delay after image update Oct 30 04:36:53.187: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-10-30 04:36:38 +0000 UTC restartedAt=2021-10-30 04:36:52 +0000 UTC (14s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:36:53.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3545" for this suite. • [SLOW TEST:402.893 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":6,"skipped":115,"failed":0} Oct 30 04:36:53.200: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:29:08.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722 Oct 30 04:29:08.400: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:10.405: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:12.405: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Oct 30 04:29:14.404: INFO: The status of Pod back-off-cap is Running (Ready = true) STEP: getting restart delay when capped Oct 30 04:40:53.862: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-10-30 04:35:37 +0000 UTC restartedAt=2021-10-30 04:40:52 +0000 UTC (5m15s) Oct 30 04:46:03.164: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-10-30 04:40:57 +0000 UTC restartedAt=2021-10-30 04:46:01 +0000 UTC (5m4s) Oct 30 04:51:17.559: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-10-30 04:46:06 +0000 UTC restartedAt=2021-10-30 04:51:16 +0000 UTC (5m10s) STEP: getting restart delay after a capped delay Oct 30 04:56:36.943: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-10-30 04:51:21 +0000 UTC restartedAt=2021-10-30 04:56:36 +0000 UTC (5m15s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:56:36.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3666" for this suite. 
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:29:08.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
Oct 30 04:29:08.400: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:29:10.405: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:29:12.405: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:29:14.404: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
Oct 30 04:40:53.862: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-10-30 04:35:37 +0000 UTC restartedAt=2021-10-30 04:40:52 +0000 UTC (5m15s)
Oct 30 04:46:03.164: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-10-30 04:40:57 +0000 UTC restartedAt=2021-10-30 04:46:01 +0000 UTC (5m4s)
Oct 30 04:51:17.559: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-10-30 04:46:06 +0000 UTC restartedAt=2021-10-30 04:51:16 +0000 UTC (5m10s)
STEP: getting restart delay after a capped delay
Oct 30 04:56:36.943: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-10-30 04:51:21 +0000 UTC restartedAt=2021-10-30 04:56:36 +0000 UTC (5m15s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:56:36.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3666" for this suite.

• [SLOW TEST:1648.586 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":2,"skipped":274,"failed":0}
Oct 30 04:56:36.954: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":335,"failed":0}
Oct 30 04:30:56.801: INFO: Running AfterSuite actions on all nodes
Oct 30 04:56:36.976: INFO: Running AfterSuite actions on node 1
Oct 30 04:56:36.976: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5770 Specs in 1657.633 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5717 Skipped

Ginkgo ran 1 suite in 27m39.152607384s
Test Suite Failed
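(A closing note on the MaxContainerBackOff spec above: the capped delays -- 5m15s, 5m4s, 5m10s, 5m15s -- are the kubelet's 300s back-off ceiling plus scheduling and pod-sync latency. A tiny Go sketch of the doubling-and-cap arithmetic, not kubelet code:)

package main

import (
	"fmt"
	"time"
)

func main() {
	// The kubelet doubles the crash-loop delay from a 10s base and caps it at
	// MaxContainerBackOff (300s). The run above observed 41s, 1m24s, 2m48s,
	// then ~5m repeatedly: the 40s/80s/160s/300s steps plus sync latency.
	const maxBackOff = 300 * time.Second
	for delay := 10 * time.Second; ; delay *= 2 {
		if delay > maxBackOff {
			delay = maxBackOff
		}
		fmt.Println("next restart delay:", delay)
		if delay == maxBackOff {
			break
		}
	}
}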