Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1651879989 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

May 6 23:33:11.116: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:33:11.118: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 6 23:33:11.143: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 6 23:33:11.215: INFO: The status of Pod cmk-init-discover-node1-tp69t is Succeeded, skipping waiting
May 6 23:33:11.215: INFO: The status of Pod cmk-init-discover-node2-kt2nj is Succeeded, skipping waiting
May 6 23:33:11.215: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 6 23:33:11.215: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 6 23:33:11.215: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 6 23:33:11.233: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 6 23:33:11.233: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 6 23:33:11.233: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 6 23:33:11.233: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 6 23:33:11.233: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 6 23:33:11.233: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 6 23:33:11.233: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 6 23:33:11.234: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 6 23:33:11.234: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 6 23:33:11.234: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 6 23:33:11.234: INFO: e2e test version: v1.21.9
May 6 23:33:11.235: INFO: kube-apiserver version: v1.21.1
May 6 23:33:11.235: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:33:11.242: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
May 6 23:33:11.245: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:33:11.266: INFO: Cluster IP family: ipv4
May 6 23:33:11.247: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:33:11.266: INFO: Cluster IP family: ipv4
S
------------------------------
May 6 23:33:11.246: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:33:11.268: INFO: Cluster IP family: ipv4
SSS
------------------------------
May 6 23:33:11.250: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:33:11.270: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
May 6 23:33:11.254: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:33:11.275: INFO: Cluster IP family: ipv4
S
------------------------------
May 6 23:33:11.251: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:33:11.276: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 6 23:33:11.267: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:33:11.288: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 6 23:33:11.274: INFO: >>> kubeConfig: /root/.kube/config
May 6 23:33:11.302: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
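The gate logged above ("Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start") reduces to polling every DaemonSet until NumberReady matches DesiredNumberScheduled. A minimal client-go sketch of that check, assuming a reachable cluster at the kubeconfig path from the log; the 2s poll interval and the program scaffolding are illustrative, not the e2e framework's own helper:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite reports using.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll until every DaemonSet in kube-system has as many ready pods as it
	// wants scheduled, or the 5m budget seen in the log runs out.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		dsList, err := client.AppsV1().DaemonSets("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, ds := range dsList.Items {
			if ds.Status.NumberReady != ds.Status.DesiredNumberScheduled {
				fmt.Printf("%d / %d pods ready in daemonset %q\n",
					ds.Status.NumberReady, ds.Status.DesiredNumberScheduled, ds.Name)
				return false, nil // keep polling
			}
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("all daemonsets in kube-system started")
}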
[BeforeEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 23:33:11.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
W0506 23:33:11.330364 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 6 23:33:11.330: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 6 23:33:11.333: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] the kubelet should create and update a lease in the kube-node-lease namespace
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50
STEP: check that lease for this Kubelet exists in the kube-node-lease namespace
STEP: check that node lease is updated at least once within the lease duration
[AfterEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 23:33:11.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-5347" for this suite.
•SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":1,"skipped":18,"failed":0}
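The two STEPs in the passing spec above amount to a Lease read in kube-node-lease plus a renewal check. A minimal sketch of that logic with client-go (client construction and imports as in the earlier sketch; the helper name is illustrative, not the spec's actual implementation):

func nodeLeaseRenewed(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	leases := client.CoordinationV1().Leases("kube-node-lease")

	// "check that lease for this Kubelet exists in the kube-node-lease namespace"
	lease, err := leases.Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if lease.Spec.RenewTime == nil || lease.Spec.LeaseDurationSeconds == nil {
		return fmt.Errorf("lease for node %s not initialized yet", nodeName)
	}
	firstSeen := lease.Spec.RenewTime.Time
	window := time.Duration(*lease.Spec.LeaseDurationSeconds) * time.Second

	// "check that node lease is updated at least once within the lease duration":
	// a healthy kubelet renews well inside one LeaseDurationSeconds window
	// (40s by default), so RenewTime must advance before the window closes.
	return wait.PollImmediate(time.Second, window, func() (bool, error) {
		lease, err := leases.Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return lease.Spec.RenewTime != nil && lease.Spec.RenewTime.Time.After(firstSeen), nil
	})
}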
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 23:33:11.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
W0506 23:33:11.798476 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 6 23:33:11.798: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 6 23:33:11.800: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] should have OwnerReferences set
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88
[AfterEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 23:33:11.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-5855" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":177,"failed":0}
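The second spec expects the kubelet to stamp its Lease with an OwnerReference pointing back at the Node object (matching name and UID), so the lease is garbage-collected when the node goes away. A sketch of that assertion (illustrative helper, same imports as the sketches above; not the spec's actual code):

func leaseOwnedByNode(ctx context.Context, client kubernetes.Interface, nodeName string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, ref := range lease.OwnerReferences {
		// The owner reference must name the Node kind and carry the node's
		// UID, otherwise garbage collection would not tie the lease to it.
		if ref.Kind == "Node" && ref.Name == node.Name && ref.UID == node.UID {
			return true, nil
		}
	}
	return false, nil
}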
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 23:33:11.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
W0506 23:33:11.565479 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 6 23:33:11.565: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 6 23:33:11.567: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
E0506 23:33:15.592030 33 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 157 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x654af00, 0x9c066c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x654af00, 0x9c066c0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc002282f0c, 0x2, 0x2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0019c1200, 0xc002282f00, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000e9b2c0, 0xc0019c1200, 0xc005059920, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc000e9b2c0, 0xc0019c1200, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000e9b2c0, 0xc0019c1200, 0xc000e9b2c0, 0xc0019c1200)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0019c1200, 0x14, 0xc004c17e30)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x77b33d8, 0xc0050514a0, 0xc000e9b038, 0x14, 0xc004c17e30, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92
k8s.io/kubernetes/test/e2e/common/node.glob..func2.18()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0001aefc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0001aefc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc001101ec0, 0x76a2fe0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003ca04b0, 0x0, 0x76a2fe0, 0xc000236800)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003ca04b0, 0x76a2fe0, 0xc000236800)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0047e0000, 0xc003ca04b0, 0x40)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0047e0000, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0047e0000, 0xc0047d6030)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000174280, 0x7f011ffa2c88, 0xc0013dd800, 0x6f170c8, 0x14, 0xc003d6f200, 0x3, 0x3, 0x7759478, 0xc000236800, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x76a80c0, 0xc0013dd800, 0x6f170c8, 0x14, 0xc003a0d180, 0x3, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x76a80c0, 0xc0013dd800, 0x6f170c8, 0x14, 0xc001b74ec0, 0x2, 0x2, 0x25)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0013dd800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0013dd800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0013dd800, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
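The panic above originates in the framework's podContainerStarted condition (resource.go:334), evaluated while the pod was still Pending and its image was still being pulled. In that state ContainerStatus.Started is a nil *bool (the kubelet has not reported a startup result yet), so an unguarded dereference fails exactly as logged. A defensive version of the condition guards both the slice index and the pointer; a sketch, not necessarily the upstream patch:

// import v1 "k8s.io/api/core/v1"
func containerStarted(pod *v1.Pod, containerIndex int) bool {
	statuses := pod.Status.ContainerStatuses
	if containerIndex >= len(statuses) {
		return false // the kubelet has not reported this container at all yet
	}
	// Started is a *bool: nil means "no startup report yet", which must be
	// treated as "not started" rather than dereferenced.
	started := statuses[containerIndex].Started
	return started != nil && *started
}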
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-6314".
STEP: Found 2 events.
May 6 23:33:15.595: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for startup-5d5162dd-589a-46af-b0c0-33f4034289e7: { } Scheduled: Successfully assigned container-probe-6314/startup-5d5162dd-589a-46af-b0c0-33f4034289e7 to node2
May 6 23:33:15.596: INFO: At 2022-05-06 23:33:15 +0000 UTC - event for startup-5d5162dd-589a-46af-b0c0-33f4034289e7: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1"
May 6 23:33:15.597: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
May 6 23:33:15.597: INFO: startup-5d5162dd-589a-46af-b0c0-33f4034289e7  node2  Pending  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:33:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:33:11 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:33:11 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:33:11 +0000 UTC }]
May 6 23:33:15.598: INFO:
May 6 23:33:15.602: INFO: Logging node info for node master1
May 6 23:33:15.605: INFO: Node Info: &Node{ObjectMeta:{master1 3ea7d7b2-d1dd-4f70-bd03-4c3ec5a8e02c 76929 0 2022-05-06 20:07:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1
2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:15:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:06 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:06 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:06 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:33:06 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fddab730508c43d4ba9efb575f362bc6,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8708efb4-3ff3-4f9b-a116-eb7702a71201,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 tasextender:latest 
localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 6 23:33:15.606: INFO: Logging kubelet events for node master1
May 6 23:33:15.609: INFO: Logging pods the kubelet thinks is on node master1
May 6 23:33:15.636: INFO: kube-controller-manager-master1 started at 2022-05-06 20:16:36 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.636: INFO: Container kube-controller-manager ready: true, restart count 2
May 6 23:33:15.636: INFO: kube-multus-ds-amd64-pdpj8 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.636: INFO: Container kube-multus ready: true, restart count 1
May 6 23:33:15.636: INFO: node-exporter-6wcwp started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May 6 23:33:15.636: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 23:33:15.636: INFO: Container node-exporter ready: true, restart count 0
May 6 23:33:15.636: INFO: kube-apiserver-master1 started at 2022-05-06 20:08:39 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.636: INFO: Container kube-apiserver ready: true, restart count 0
May 6 23:33:15.636: INFO: kube-proxy-bnqzh started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.636: INFO: Container kube-proxy ready: true, restart count 2
May 6 23:33:15.636: INFO: kube-flannel-dz2ld started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May 6 23:33:15.636: INFO: Init container install-cni ready: true, restart count 0
May 6 23:33:15.636: INFO: Container kube-flannel ready: true, restart count 1
May 6 23:33:15.636: INFO: coredns-8474476ff8-jtj8t started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.636: INFO: Container coredns ready: true, restart count 1
May 6 23:33:15.636: INFO: container-registry-65d7c44b96-5pp99 started at 2022-05-06 20:14:46 +0000 UTC (0+2 container statuses recorded)
May 6 23:33:15.636: INFO: Container docker-registry ready: true, restart count 0
May 6 23:33:15.636: INFO: Container nginx ready: true, restart count 0
May 6 23:33:15.636: INFO: kube-scheduler-master1 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.636: INFO: Container kube-scheduler ready: true, restart count 0
May 6 23:33:15.722: INFO: Latency metrics for node master1
May 6 23:33:15.722: INFO: Logging node info for node master2
May 6 23:33:15.726: INFO: Node Info: &Node{ObjectMeta:{master2 0aed38bc-6408-4920-b364-7d6b9bff7102 77132 0 2022-05-06 20:08:00
+0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:12 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:12 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:12 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:33:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94f6743f72cc461cb731cffce21ae835,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:340a40ae-5d7c-47da-a6f4-a4b5b64d56f7,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 6 23:33:15.726: INFO: Logging kubelet events for node master2
May 6 23:33:15.729: INFO: Logging pods the kubelet thinks is on node master2
May 6 23:33:15.737: INFO: kube-scheduler-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.737: INFO: Container kube-scheduler ready: true, restart count 2
May 6 23:33:15.737: INFO: kube-apiserver-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.738: INFO: Container kube-apiserver ready: true, restart count 0
May 6 23:33:15.738: INFO: kube-flannel-4kjc4 started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May 6 23:33:15.738: INFO: Init container install-cni ready: true, restart count 0
May 6 23:33:15.738: INFO: Container kube-flannel ready: true, restart count 1
May 6 23:33:15.738: INFO: kube-multus-ds-amd64-gd6zv started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.738: INFO: Container kube-multus ready: true, restart count 1
May 6 23:33:15.738: INFO: kube-controller-manager-master2 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.738: INFO: Container kube-controller-manager ready: true, restart count 1
May 6 23:33:15.738: INFO: kube-proxy-tr8m9 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.738: INFO: Container kube-proxy ready: true, restart count 2
May 6 23:33:15.738: INFO: dns-autoscaler-7df78bfcfb-srh4b started at 2022-05-06 20:10:54 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.738: INFO: Container autoscaler ready: true, restart count 1
May 6 23:33:15.738: INFO: node-exporter-b26kc started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May 6 23:33:15.738: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 23:33:15.738: INFO: Container node-exporter ready: true, restart count 0
May 6 23:33:15.814: INFO: Latency metrics for node master2
May 6 23:33:15.814: INFO: Logging node info for node master3
May 6 23:33:15.816: INFO: Node Info: &Node{ObjectMeta:{master3 1cc41c26-3708-4912-8ff5-aa83b70d989e 76934 0 2022-05-06
20:08:11 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-06 20:09:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:17:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-05-06 20:18:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:10 +0000 UTC,LastTransitionTime:2022-05-06 
20:13:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:09 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:09 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:09 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:33:09 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:045e9ce9dfcd42ef970e1ed3a55941b3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:ee1f3fa6-4f8f-4726-91f5-b87ee8838a88,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 6 23:33:15.816: INFO: Logging kubelet events for node master3
May 6 23:33:15.818: INFO: Logging pods the kubelet thinks is on node master3
May 6 23:33:15.828: INFO: kube-controller-manager-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.828: INFO: Container kube-controller-manager ready: true, restart count 3
May 6 23:33:15.828: INFO: kube-scheduler-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.828: INFO: Container kube-scheduler ready: true, restart count 2
May 6 23:33:15.828: INFO: kube-proxy-m9tv5 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.828: INFO: Container kube-proxy ready: true, restart count 2
May 6 23:33:15.828: INFO: kube-flannel-2twpc started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May 6 23:33:15.828: INFO: Init container install-cni ready: true, restart count 2
May 6 23:33:15.828: INFO: Container kube-flannel ready: true, restart count 1
May 6 23:33:15.828: INFO: node-exporter-mcj6x started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May 6 23:33:15.828: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 23:33:15.828: INFO: Container node-exporter ready: true, restart count 0
May 6 23:33:15.828: INFO: kube-apiserver-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.828: INFO: Container kube-apiserver ready: true, restart count 0
May 6 23:33:15.828: INFO: kube-multus-ds-amd64-mtj2t started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.828: INFO: Container kube-multus ready: true, restart count 1
May 6 23:33:15.828: INFO: coredns-8474476ff8-t4bcd started at 2022-05-06 20:10:52 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.828: INFO: Container coredns ready: true, restart count 1
May 6 23:33:15.828: INFO: node-feature-discovery-controller-cff799f9f-rwzfc started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded)
May 6 23:33:15.828: INFO: Container nfd-controller ready: true, restart count 0
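Each "Logging pods the kubelet thinks is on node ..." block above is an all-namespace pod list filtered by the spec.nodeName field selector, printed with per-container readiness and restart counts. A minimal equivalent (illustrative helper name; imports as in the first sketch):

func logPodsOnNode(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	// The field selector is evaluated server-side, so only pods bound to the
	// given node come back.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready: %v, restart count %d\n",
				p.Namespace, p.Name, cs.Name, cs.Ready, cs.RestartCount)
		}
	}
	return nil
}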
May 6 23:33:15.909: INFO: Latency metrics for node master3
May 6 23:33:15.909: INFO: Logging node info for node node1
May 6 23:33:15.926: INFO: Node Info: &Node{ObjectMeta:{node1 851b0a69-efd4-49b7-98ef-f0cfe2d311c6 77003 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources:
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 22:27:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-05-06 23:33:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:24 +0000 UTC,LastTransitionTime:2022-05-06 20:13:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:10 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:10 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:10 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:33:10 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bae6af61b07b462daf118753f89950b1,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:871de03d-49a7-4910-8d15-63422e0e629a,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:859ab6768a6f26a79bc42b231664111317d095a4f04e4b6fe79ce37b3d199097 nginx:latest],SizeBytes:141522124,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 23:33:15.927: INFO: Logging kubelet events for node node1 May 6 23:33:15.928: INFO: Logging pods the kubelet thinks is on node node1 May 6 23:33:15.946: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-vmmdm started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:15.946: INFO: Container cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 
ready: false, restart count 0 May 6 23:33:15.946: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-mht52 started at (0+0 container statuses recorded) May 6 23:33:15.946: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-cs24v started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:15.946: INFO: Container cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 ready: false, restart count 0 May 6 23:33:15.946: INFO: prometheus-operator-585ccfb458-vrrfv started at 2022-05-06 20:23:12 +0000 UTC (0+2 container statuses recorded) May 6 23:33:15.946: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 23:33:15.946: INFO: Container prometheus-operator ready: true, restart count 0 May 6 23:33:15.946: INFO: prometheus-k8s-0 started at 2022-05-06 20:23:29 +0000 UTC (0+4 container statuses recorded) May 6 23:33:15.946: INFO: Container config-reloader ready: true, restart count 0 May 6 23:33:15.946: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 6 23:33:15.946: INFO: Container grafana ready: true, restart count 0 May 6 23:33:15.946: INFO: Container prometheus ready: true, restart count 1 May 6 23:33:15.946: INFO: cmk-trkp8 started at 2022-05-06 20:22:16 +0000 UTC (0+2 container statuses recorded) May 6 23:33:15.946: INFO: Container nodereport ready: true, restart count 0 May 6 23:33:15.946: INFO: Container reconcile ready: true, restart count 0 May 6 23:33:15.946: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-rh8gx started at (0+0 container statuses recorded) May 6 23:33:15.946: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 23:33:15.946: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 23:33:15.946: INFO: cmk-init-discover-node1-tp69t started at 2022-05-06 20:21:33 +0000 UTC (0+3 container statuses recorded) May 6 23:33:15.946: INFO: Container discover ready: false, restart count 0 May 6 23:33:15.946: INFO: Container init ready: false, restart count 0 May 6 23:33:15.946: INFO: Container install ready: false, restart count 0 May 6 23:33:15.946: INFO: kube-flannel-ph67x started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 23:33:15.946: INFO: Init container install-cni ready: true, restart count 2 May 6 23:33:15.946: INFO: Container kube-flannel ready: true, restart count 3 May 6 23:33:15.946: INFO: pod-back-off-image started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:15.946: INFO: Container back-off ready: false, restart count 0 May 6 23:33:15.946: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-6w9bq started at (0+0 container statuses recorded) May 6 23:33:15.946: INFO: nginx-proxy-node1 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded) May 6 23:33:15.946: INFO: Container nginx-proxy ready: true, restart count 2 May 6 23:33:15.946: INFO: kube-proxy-xc75d started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 23:33:15.946: INFO: Container kube-proxy ready: true, restart count 2 May 6 23:33:15.946: INFO: startup-5e968877-bc0f-4436-a633-44df7eb0a1b6 started at (0+0 container statuses recorded) May 6 23:33:15.946: INFO: security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9 started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:15.946: INFO: Container test-container ready: false, restart count 0 May 6 23:33:15.946: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-52wjw 
started at (0+0 container statuses recorded) May 6 23:33:15.946: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-jprgb started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:15.946: INFO: Container cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 ready: false, restart count 0 May 6 23:33:15.946: INFO: node-feature-discovery-worker-fbf8d started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 23:33:15.946: INFO: Container nfd-worker ready: true, restart count 0 May 6 23:33:15.946: INFO: startup-override-5e5b9a39-5d32-436b-a482-875f49f608f2 started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:15.946: INFO: Container agnhost-container ready: false, restart count 0 May 6 23:33:15.946: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-64m7f started at (0+0 container statuses recorded) May 6 23:33:15.946: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-b5wqx started at (0+0 container statuses recorded) May 6 23:33:15.946: INFO: collectd-wq9cz started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 23:33:15.946: INFO: Container collectd ready: true, restart count 0 May 6 23:33:15.946: INFO: Container collectd-exporter ready: true, restart count 0 May 6 23:33:15.946: INFO: Container rbac-proxy ready: true, restart count 0 May 6 23:33:15.946: INFO: kube-multus-ds-amd64-2mv45 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 23:33:15.946: INFO: Container kube-multus ready: true, restart count 1 May 6 23:33:15.946: INFO: node-exporter-hqs4s started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 23:33:15.946: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 23:33:15.946: INFO: Container node-exporter ready: true, restart count 0 May 6 23:33:17.750: INFO: Latency metrics for node node1 May 6 23:33:17.750: INFO: Logging node info for node node2 May 6 23:33:17.752: INFO: Node Info: &Node{ObjectMeta:{node2 2dab2a66-f2eb-49db-9725-3dda82cede11 77004 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true 
feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 22:28:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-05-06 23:33:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}},"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:27 +0000 UTC,LastTransitionTime:2022-05-06 20:13:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:09 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:09 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:33:09 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:33:09 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c77ab26e59394c64a4d3ca530c1cefb5,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0fe5c664-0bc1-49bd-8b38-c77825eebe76,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 
localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 23:33:17.753: INFO: Logging kubelet events for node node2 May 6 23:33:17.756: INFO: Logging pods the kubelet thinks is on node node2 May 6 23:33:18.094: INFO: pod-submit-status-0-0 started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container busybox ready: false, restart count 0 May 6 23:33:18.094: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-dgp2c started at (0+0 container statuses recorded) May 6 23:33:18.094: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-tl2x8 started at (0+0 container statuses recorded) May 6 23:33:18.094: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-zthkq started at (0+0 container statuses recorded) May 6 23:33:18.094: INFO: node-feature-discovery-worker-8phhs started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container nfd-worker ready: true, restart count 0 May 6 23:33:18.094: INFO: kube-multus-ds-amd64-gtzj9 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container kube-multus ready: true, restart count 1 May 6 23:33:18.094: INFO: cmk-cb5rv started at 2022-05-06 20:22:17 +0000 UTC (0+2 container statuses recorded) May 6 23:33:18.094: INFO: Container nodereport ready: true, restart count 0 May 6 23:33:18.094: INFO: Container reconcile ready: true, restart count 0 May 6 23:33:18.094: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 started at 2022-05-06 20:26:21 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container tas-extender ready: true, restart count 0 May 6 23:33:18.094: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-g492n started at (0+0 container statuses recorded) May 6 23:33:18.094: INFO: 
kube-proxy-g77fj started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container kube-proxy ready: true, restart count 2 May 6 23:33:18.094: INFO: startup-5d5162dd-589a-46af-b0c0-33f4034289e7 started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container busybox ready: false, restart count 0 May 6 23:33:18.094: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-lzgft started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 ready: false, restart count 0 May 6 23:33:18.094: INFO: back-off-cap started at (0+0 container statuses recorded) May 6 23:33:18.094: INFO: pod-submit-status-1-0 started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container busybox ready: false, restart count 0 May 6 23:33:18.094: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-k5pr4 started at (0+0 container statuses recorded) May 6 23:33:18.094: INFO: busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container busybox ready: false, restart count 0 May 6 23:33:18.094: INFO: collectd-mbz88 started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 23:33:18.094: INFO: Container collectd ready: true, restart count 0 May 6 23:33:18.094: INFO: Container collectd-exporter ready: true, restart count 0 May 6 23:33:18.094: INFO: Container rbac-proxy ready: true, restart count 0 May 6 23:33:18.094: INFO: without-label started at (0+0 container statuses recorded) May 6 23:33:18.094: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-k4ksx started at (0+0 container statuses recorded) May 6 23:33:18.094: INFO: cmk-init-discover-node2-kt2nj started at 2022-05-06 20:21:53 +0000 UTC (0+3 container statuses recorded) May 6 23:33:18.094: INFO: Container discover ready: false, restart count 0 May 6 23:33:18.094: INFO: Container init ready: false, restart count 0 May 6 23:33:18.094: INFO: Container install ready: false, restart count 0 May 6 23:33:18.094: INFO: node-exporter-4xqmj started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 23:33:18.094: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 23:33:18.094: INFO: Container node-exporter ready: true, restart count 0 May 6 23:33:18.094: INFO: pod-submit-status-2-0 started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container busybox ready: false, restart count 0 May 6 23:33:18.094: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-np9nf started at 2022-05-06 23:33:11 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 ready: false, restart count 0 May 6 23:33:18.094: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-tq9q2 started at (0+0 container statuses recorded) May 6 23:33:18.094: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-9ppm2 started at (0+0 container statuses recorded) May 6 23:33:18.094: INFO: nginx-proxy-node2 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container nginx-proxy ready: true, restart count 2 May 6 23:33:18.094: INFO: kube-flannel-ffwfn started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 23:33:18.094: INFO: Init container install-cni 
ready: true, restart count 1 May 6 23:33:18.094: INFO: Container kube-flannel ready: true, restart count 2 May 6 23:33:18.094: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 6 23:33:18.094: INFO: kubernetes-dashboard-785dcbb76d-29wg6 started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 6 23:33:18.094: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 23:33:18.094: INFO: cmk-webhook-6c9d5f8578-vllpr started at 2022-05-06 20:22:17 +0000 UTC (0+1 container statuses recorded) May 6 23:33:18.094: INFO: Container cmk-webhook ready: true, restart count 0 May 6 23:33:18.094: INFO: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2-9p2np started at (0+0 container statuses recorded) May 6 23:33:19.208: INFO: Latency metrics for node node2 May 6 23:33:19.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6314" for this suite. •! Panic [7.677 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Test Panicked runtime error: invalid memory address or nil pointer dereference /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109 panic(0x654af00, 0x9c066c0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc002282f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0019c1200, 0xc002282f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000e9b2c0, 0xc0019c1200, 0xc005059920, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc000e9b2c0, 0xc0019c1200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000e9b2c0, 0xc0019c1200, 0xc000e9b2c0, 0xc0019c1200) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 
0x45d964b800, 0xc0019c1200, 0x14, 0xc004c17e30)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x77b33d8, 0xc0050514a0, 0xc000e9b038, 0x14, 0xc004c17e30, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92
k8s.io/kubernetes/test/e2e/common/node.glob..func2.18()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0013dd800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0013dd800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0013dd800, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
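
Read together, the trace points at the poll condition built in pod/resource.go:334: on a pod this young, status.containerStatuses can still be empty, and each status's Started field is a *bool the kubelet has not populated yet, so an unguarded dereference yields exactly this "invalid memory address or nil pointer dereference". A minimal sketch of that condition with the missing guards, assuming client-go's core/v1 types; this illustrates the pattern and is not the framework's exact code:

package sketch

import corev1 "k8s.io/api/core/v1"

// containerStarted mirrors the kind of poll condition that panicked above.
// On a freshly created pod, Status.ContainerStatuses may still be empty, and
// ContainerStatus.Started is a *bool that stays nil until the kubelet first
// reports it; either one, dereferenced unguarded, reproduces the panic.
func containerStarted(pod *corev1.Pod, idx int) bool {
	if idx < 0 || idx >= len(pod.Status.ContainerStatuses) {
		return false // no status recorded for this container yet
	}
	started := pod.Status.ContainerStatuses[idx].Started
	if started == nil {
		return false // kubelet has not set Started yet; do not dereference
	}
	return *started
}
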
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 23:33:11.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
May 6 23:33:11.522: INFO: Waiting up to 5m0s for pod "security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9" in namespace "security-context-789" to be "Succeeded or Failed"
May 6 23:33:11.525: INFO: Pod "security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.681414ms
May 6 23:33:13.528: INFO: Pod "security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006001733s
May 6 23:33:15.533: INFO: Pod "security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010265952s
May 6 23:33:17.537: INFO: Pod "security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014561643s
May 6 23:33:19.540: INFO: Pod "security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018105631s
May 6 23:33:21.544: INFO: Pod "security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.021815981s
STEP: Saw pod success
May 6 23:33:21.544: INFO: Pod "security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9" satisfied condition "Succeeded or Failed"
May 6 23:33:21.546: INFO: Trying to get logs from node node1 pod security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9 container test-container:
STEP: delete the pod
May 6 23:33:21.816: INFO: Waiting for pod security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9 to disappear
May 6 23:33:21.818: INFO: Pod security-context-5452ec14-54ed-4563-bd52-7121ac2e96b9 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 23:33:21.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-789" for this suite.

• [SLOW TEST:10.336 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":2,"skipped":48,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 23:33:19.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run with an explicit root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 23:33:29.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-357" for this suite.
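
The runAsNonRoot spec just finished (summary below) exercises the kubelet's admission-time check: a container that asserts runAsNonRoot while explicitly requesting UID 0 must be refused rather than started as root. A minimal sketch of a pod of that shape, assuming client-go's core/v1 types; the pod name is illustrative, the image is one already present in the node image lists above:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

// explicitRootUIDPod asserts RunAsNonRoot while pinning the UID to 0 (root);
// the kubelet must reject the container instead of starting it.
func explicitRootUIDPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-root-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "explicit-root-uid",
				Image: "k8s.gcr.io/e2e-test-images/nonewprivs:1.3",
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: boolPtr(true),
					RunAsUser:    int64Ptr(0), // conflicts with RunAsNonRoot above
				},
			}},
		},
	}
}
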
• [SLOW TEST:10.045 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":320,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:11.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0506 23:33:11.526455 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 23:33:11.526: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 23:33:11.528: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-5e5b9a39-5d32-436b-a482-875f49f608f2 in namespace container-probe-8658 May 6 23:33:25.548: INFO: Started pod startup-override-5e5b9a39-5d32-436b-a482-875f49f608f2 in namespace container-probe-8658 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:33:25.552: INFO: Initial restart count of pod startup-override-5e5b9a39-5d32-436b-a482-875f49f608f2 is 0 May 6 23:33:33.568: INFO: Restart count of pod container-probe-8658/startup-override-5e5b9a39-5d32-436b-a482-875f49f608f2 is now 1 (8.016119859s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:33:33.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8658" for this suite. 
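
The override verified in the spec above works because, in the v1.21 API this suite runs against, Probe itself carries a terminationGracePeriodSeconds field (behind the ProbeTerminationGracePeriod gate named in the spec title); when the startup probe fails, the kubelet kills the container using the probe-level value instead of the pod-level one. A sketch of a container wired that way; all values are illustrative, and note that releases after v1.21 rename the embedded Handler struct to ProbeHandler:

package sketch

import corev1 "k8s.io/api/core/v1"

func int64Ptr(i int64) *int64 { return &i }

// failingStartup carries its own grace period on the startup probe: when the
// probe gives up, the kubelet terminates the container after 10s rather than
// waiting out the pod-level terminationGracePeriodSeconds.
func failingStartup() corev1.Container {
	return corev1.Container{
		Name:    "busybox",
		Image:   "busybox:1.28",
		Command: []string{"sh", "-c", "sleep 600"},
		StartupProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
			},
			PeriodSeconds:                 2,
			FailureThreshold:              1,
			TerminationGracePeriodSeconds: int64Ptr(10),
		},
	}
}
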
• [SLOW TEST:22.081 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":1,"skipped":54,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:30.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 6 23:33:30.082: INFO: Waiting up to 5m0s for pod "security-context-46fc4dac-796c-4111-95cf-9f6f5b87a5f3" in namespace "security-context-3838" to be "Succeeded or Failed" May 6 23:33:30.084: INFO: Pod "security-context-46fc4dac-796c-4111-95cf-9f6f5b87a5f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106816ms May 6 23:33:32.088: INFO: Pod "security-context-46fc4dac-796c-4111-95cf-9f6f5b87a5f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005573113s May 6 23:33:34.092: INFO: Pod "security-context-46fc4dac-796c-4111-95cf-9f6f5b87a5f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010028775s May 6 23:33:36.095: INFO: Pod "security-context-46fc4dac-796c-4111-95cf-9f6f5b87a5f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012446027s May 6 23:33:38.099: INFO: Pod "security-context-46fc4dac-796c-4111-95cf-9f6f5b87a5f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017029579s May 6 23:33:40.103: INFO: Pod "security-context-46fc4dac-796c-4111-95cf-9f6f5b87a5f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.020169249s STEP: Saw pod success May 6 23:33:40.103: INFO: Pod "security-context-46fc4dac-796c-4111-95cf-9f6f5b87a5f3" satisfied condition "Succeeded or Failed" May 6 23:33:40.105: INFO: Trying to get logs from node node1 pod security-context-46fc4dac-796c-4111-95cf-9f6f5b87a5f3 container test-container: STEP: delete the pod May 6 23:33:40.118: INFO: Waiting for pod security-context-46fc4dac-796c-4111-95cf-9f6f5b87a5f3 to disappear May 6 23:33:40.120: INFO: Pod security-context-46fc4dac-796c-4111-95cf-9f6f5b87a5f3 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:33:40.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3838" for this suite. 
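
For the pod.Spec.SecurityContext.RunAsUser spec just finished (summary below): the UID is set once at pod scope and inherited by every container that does not override it, which the test confirms by reading the container's log. A minimal sketch with illustrative names and UID:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

// runAsUserPod sets the UID at pod scope; test-container inherits it without
// any container-level SecurityContext, so `id -u` should print 1001.
func runAsUserPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-runasuser"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1001),
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c", "id -u"},
			}},
		},
	}
}
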
• [SLOW TEST:10.080 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":2,"skipped":463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:11.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. STEP: verifying the node has the label foo-c3c51572-9635-4b70-94a3-f74bbcd5b35b bar STEP: verifying the node has the label fizz-3b7daec9-cd2f-409d-a7ce-ef58da4d6485 buzz STEP: Trying to create runtimeclass and pod STEP: removing the label fizz-3b7daec9-cd2f-409d-a7ce-ef58da4d6485 off the node node2 STEP: verifying the node doesn't have the label fizz-3b7daec9-cd2f-409d-a7ce-ef58da4d6485 STEP: removing the label foo-c3c51572-9635-4b70-94a3-f74bbcd5b35b off the node node2 STEP: verifying the node doesn't have the label foo-c3c51572-9635-4b70-94a3-f74bbcd5b35b [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:33:41.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-4713" for this suite. 
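
The RuntimeClass spec above relies on scheduling-aware runtime classes: the class's scheduling.nodeSelector is merged into the pod's own node selector, steering the pod to the labeled node (node2 here) with no taints or tolerations involved. A sketch using the node/v1 types and the two labels logged in the spec above; the class name and handler are illustrative, and the handler must match one actually configured in the node's CRI runtime:

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	nodev1 "k8s.io/api/node/v1"
)

// scheduledRuntimeClass pins pods that request this class to nodes carrying
// both labels that the spec applied to node2 before launching the pod.
func scheduledRuntimeClass() *nodev1.RuntimeClass {
	return &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "sketch-runtimeclass"},
		Handler:    "runc",
		Scheduling: &nodev1.Scheduling{
			NodeSelector: map[string]string{
				"foo-c3c51572-9635-4b70-94a3-f74bbcd5b35b":  "bar",
				"fizz-3b7daec9-cd2f-409d-a7ce-ef58da4d6485": "buzz",
			},
		},
	}
}
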
• [SLOW TEST:30.125 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":2,"skipped":181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:22.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-107dce8e-5304-45c5-8dc0-cc1905264684 in namespace container-probe-3741 May 6 23:33:40.057: INFO: Started pod liveness-107dce8e-5304-45c5-8dc0-cc1905264684 in namespace container-probe-3741 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:33:40.059: INFO: Initial restart count of pod liveness-107dce8e-5304-45c5-8dc0-cc1905264684 is 0 May 6 23:33:44.071: INFO: Restart count of pod container-probe-3741/liveness-107dce8e-5304-45c5-8dc0-cc1905264684 is now 1 (4.011218996s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:33:44.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3741" for this suite. 
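
The liveness spec above (summary below) drives the kubelet's HTTP prober through a local redirect: the probed handler answers 302 with a Location on the same host, the prober follows it, and liveness is decided by the redirect target, so the container still gets restarted once that target fails. A sketch of such a probe, assuming an agnhost-style /redirect endpoint; the port and paths are illustrative:

package sketch

import (
	"net/url"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// localRedirectLiveness probes a path that redirects to /healthz on the same
// host; the kubelet's HTTP prober follows the local redirect. (Handler is the
// v1.21 field name; later releases call it ProbeHandler.)
func localRedirectLiveness() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/redirect?loc=" + url.QueryEscape("/healthz"),
				Port: intstr.FromInt(8080),
			},
		},
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
}
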
• [SLOW TEST:22.070 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":3,"skipped":148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:40.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:33:48.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2059" for this suite. • [SLOW TEST:8.088 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":3,"skipped":566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:48.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 May 6 23:33:48.511: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:33:48.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-1138" for this suite. 
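The invalid-registry spec above creates a container whose image points at a registry that cannot be reached and then inspects the container status. A minimal way to reproduce that by hand; the image reference is illustrative, since any unreachable registry behaves the same:

apiVersion: v1
kind: Pod
metadata:
  name: invalid-registry-test
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: invalid.example.com/invalid/alpine:3.1   # hypothetical unreachable registry
    imagePullPolicy: Always

The kubelet never starts the container; its status stays in waiting with reason ErrImagePull and then ImagePullBackOff, which is the terminal state the spec asserts on.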
S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:48.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 May 6 23:33:48.583: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:33:48.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-6161" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:48.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 May 6 23:33:48.790: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:33:48.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-3969" for this suite. 
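The AppArmor spec was skipped here because the node OS distro is debian rather than gci/ubuntu, but the behaviour it targets is driven purely by a pod annotation. A sketch of disabling AppArmor for one container using the beta annotation form current in v1.21; the pod name, container name, and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-unconfined
  annotations:
    container.apparmor.security.beta.kubernetes.io/test: unconfined
spec:
  containers:
  - name: test                       # the annotation suffix must match this container name
    image: busybox:1.33              # illustrative image
    command: ["sh", "-c", "sleep 3600"]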
S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:11.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet W0506 23:33:11.655269 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 23:33:11.655: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 23:33:11.657: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 in namespace kubelet-1742 I0506 23:33:11.691436 26 runners.go:190] Created replication controller with name: cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2, namespace: kubelet-1742, replica count: 20 I0506 23:33:21.742921 26 runners.go:190] cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 Pods: 20 out of 20 created, 0 running, 20 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:33:31.743727 26 runners.go:190] cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 Pods: 20 out of 20 created, 9 running, 11 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:33:41.745339 26 runners.go:190] cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 23:33:42.746: INFO: Checking pods on node node1 via /runningpods endpoint May 6 23:33:42.746: INFO: Checking pods on node node2 via /runningpods endpoint May 6 23:33:42.827: INFO:

Resource usage on node "master2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"runtime"   0.105       598.07                  258.38
"kubelet"   0.105       598.07                  258.38
"/"         0.326       3570.69                 1533.78

Resource usage on node "master3":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.438       3814.13                 1701.46
"runtime"   0.099       567.30                  266.27
"kubelet"   0.099       567.30                  266.27

Resource usage on node "node1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         2.669       6413.13                 2355.57
"runtime"   1.215       2640.58                 562.91
"kubelet"   1.215       2640.58                 562.91

Resource usage on node "node2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.899       4058.37                 1268.80
"runtime"   1.238       1479.63                 576.80
"kubelet"   1.238       1479.63                 576.80

Resource usage on node "master1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.387       4791.37                 1634.63
"runtime"   0.130       677.86                  302.31
"kubelet"   0.130       677.86                  302.31

STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 in namespace kubelet-1742, will wait for the garbage collector to delete the pods May 6 23:33:42.883: INFO: Deleting ReplicationController cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 took: 3.693587ms May 6 23:33:43.483: INFO: Terminating ReplicationController cleanup20-d9268741-faae-467a-98b4-3e12e4c416c2 pods took: 600.260083ms May 6 23:34:01.085: INFO: Checking pods on node node1 via /runningpods endpoint May 6 23:34:01.085: INFO: Checking pods on node node2 via /runningpods endpoint May 6 23:34:01.101: INFO: Deleting 20 pods on 2 nodes completed in 1.016570844s after the RC was deleted May 6 23:34:01.101: INFO:

CPU usage of containers on node "master2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.288  0.294  0.318  0.324  0.324  0.324
"runtime"   0.000  0.000  0.084  0.095  0.095  0.095  0.095
"kubelet"   0.000  0.000  0.084  0.095  0.095  0.095  0.095

CPU usage of containers on node "master3":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.438  0.438  0.564  0.564  0.564
"runtime"   0.000  0.000  0.094  0.099  0.099  0.099  0.099
"kubelet"   0.000  0.000  0.094  0.099  0.099  0.099  0.099

CPU usage of containers on node "node1":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  2.060  2.060  2.132  2.132  2.132
"runtime"   0.000  0.000  0.098  1.052  1.052  1.052  1.052
"kubelet"   0.000  0.000  0.098  1.052  1.052  1.052  1.052

CPU usage of containers on node "node2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.480  0.867  0.899  1.629  1.629  1.629
"runtime"   0.000  0.000  0.841  0.884  0.884  0.884  0.884
"kubelet"   0.000  0.000  0.841  0.884  0.884  0.884  0.884

CPU usage of containers on node "master1":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.310  0.363  0.369  0.387  0.387  0.387
"runtime"   0.000  0.000  0.116  0.116  0.124  0.124  0.124
"kubelet"   0.000  0.000  0.116  0.116  0.124  0.124  0.124

[AfterEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node node1 STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node node2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:01.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-1742" for this suite. • [SLOW TEST:49.499 seconds] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279 kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":110,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:01.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:05.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9141" for this suite.
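The "should not run without a specified user ID" spec above completes in a few seconds with no pod events logged because the interesting work happens inside the kubelet: with runAsNonRoot set and no UID pinned, an image that would run as root must be refused. A sketch of the shape of pod it creates; the pod name, image, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: nonroot-no-uid
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.33        # busybox's default user is root
    command: ["id"]
    securityContext:
      runAsNonRoot: true       # no runAsUser given, so the kubelet inspects the image's user
                               # and refuses to start the container (CreateContainerConfigError)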
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":2,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:48.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 6 23:34:05.877: INFO: start=2022-05-06 23:34:00.850827095 +0000 UTC m=+51.365114385, now=2022-05-06 23:34:05.877307019 +0000 UTC m=+56.391594324, kubelet pod: {"metadata":{"name":"pod-submit-remove-05a1d199-ad5a-4958-bc51-8a28e4ef19e7","namespace":"pods-5045","uid":"cd2bcdb0-b35e-4e23-842f-80faf9a0a7a0","resourceVersion":"77897","creationTimestamp":"2022-05-06T23:33:48Z","deletionTimestamp":"2022-05-06T23:34:30Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"821186483"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.243\"\n ],\n \"mac\": \"c6:a3:ac:97:50:d4\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.243\"\n ],\n \"mac\": \"c6:a3:ac:97:50:d4\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2022-05-06T23:33:48.837646394Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-05-06T23:33:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-hmm7x","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-hmm7x","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-05-06T23:33:48Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-05-06T23:34:03Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-05-06T23:34:03Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-05-06T23:33:48Z"}],"hostIP":"10.10.190.207","podIP":"10.244.3.243","podIPs":[{"ip":"10.244.3.243"}],"startTime":"2022-05-06T23:33:48Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. 
The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"","started":false}],"qosClass":"BestEffort"}} May 6 23:34:10.982: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:10.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5045" for this suite. • [SLOW TEST:22.193 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":4,"skipped":712,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:05.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 May 6 23:34:05.764: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-1953cba2-b317-4993-9260-90e59b211b26" in namespace "security-context-test-9159" to be "Succeeded or Failed" May 6 23:34:05.766: INFO: Pod "busybox-readonly-true-1953cba2-b317-4993-9260-90e59b211b26": Phase="Pending", Reason="", readiness=false. Elapsed: 1.813096ms May 6 23:34:07.769: INFO: Pod "busybox-readonly-true-1953cba2-b317-4993-9260-90e59b211b26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005529795s May 6 23:34:09.773: INFO: Pod "busybox-readonly-true-1953cba2-b317-4993-9260-90e59b211b26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009599926s May 6 23:34:11.778: INFO: Pod "busybox-readonly-true-1953cba2-b317-4993-9260-90e59b211b26": Phase="Failed", Reason="", readiness=false. Elapsed: 6.014308831s May 6 23:34:11.778: INFO: Pod "busybox-readonly-true-1953cba2-b317-4993-9260-90e59b211b26" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:11.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9159" for this suite. 
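Note that the readOnlyRootFilesystem pod above finishes with Phase="Failed" and the framework still reports the condition satisfied: the wait is literally for "Succeeded or Failed", and a failed write is the expected outcome when the root filesystem is mounted read-only. A sketch of the shape of that pod; the name, image, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-true
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.33
    command: ["sh", "-c", "touch /file"]   # any write to the rootfs must fail
    securityContext:
      readOnlyRootFilesystem: true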
• [SLOW TEST:6.067 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:12.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:12.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-3579" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":4,"skipped":599,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:42.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:12.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8359" for this suite. 
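The readiness-gates spec above registers two custom pod conditions and toggles them; the pod only reports Ready while every listed gate is true, and flipping one condition back to false drops readiness again. A minimal sketch of a gated pod; the pod name and image are illustrative, and the conditions themselves are set by patching the pod's status subresource, which is what the suite does here:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-gate-pod
spec:
  readinessGates:
  - conditionType: "k8s.io/test-condition1"
  - conditionType: "k8s.io/test-condition2"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1   # illustrative image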
• [SLOW TEST:30.085 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":3,"skipped":562,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:11.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0506 23:33:11.702743 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 23:33:11.703: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 23:33:11.704: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 in namespace container-probe-3888 May 6 23:33:27.740: INFO: Started pod busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 in namespace container-probe-3888 May 6 23:33:27.740: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (1.322µs elapsed) May 6 23:33:29.741: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (2.000523384s elapsed) May 6 23:33:31.743: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (4.002909202s elapsed) May 6 23:33:33.745: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (6.004171367s elapsed) May 6 23:33:35.749: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (8.008291401s elapsed) May 6 23:33:37.750: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (10.009966427s elapsed) May 6 23:33:39.751: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (12.01024153s elapsed) May 6 23:33:41.752: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (14.011409205s elapsed) May 6 23:33:43.757: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (16.016311818s elapsed) May 6 23:33:45.758: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (18.017941209s elapsed) May 6 23:33:47.761: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (20.020533129s elapsed) May 6 23:33:49.762: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (22.021112076s 
elapsed) May 6 23:33:51.764: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (24.023289662s elapsed) May 6 23:33:53.765: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (26.025024398s elapsed) May 6 23:33:55.770: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (28.029202773s elapsed) May 6 23:33:57.770: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (30.029684176s elapsed) May 6 23:33:59.771: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (32.030360459s elapsed) May 6 23:34:01.771: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (34.030800003s elapsed) May 6 23:34:03.772: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (36.032025259s elapsed) May 6 23:34:05.773: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (38.032652276s elapsed) May 6 23:34:07.775: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (40.034139116s elapsed) May 6 23:34:09.776: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (42.035327607s elapsed) May 6 23:34:11.776: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (44.03581926s elapsed) May 6 23:34:13.778: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (46.037870298s elapsed) May 6 23:34:15.780: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (48.039213087s elapsed) May 6 23:34:17.783: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (50.042252644s elapsed) May 6 23:34:19.783: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (52.042554105s elapsed) May 6 23:34:21.784: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (54.044019893s elapsed) May 6 23:34:23.789: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (56.048251278s elapsed) May 6 23:34:25.791: INFO: pod container-probe-3888/busybox-47a13dda-82e2-4ebc-84ca-2b555e3254f9 is not ready (58.050682287s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:27.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3888" for this suite. 
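The long run of "is not ready" lines above is the point of the spec: the readiness probe command deliberately runs longer than timeoutSeconds, every probe attempt times out, and since readiness probes never restart a container the pod simply stays unready for the whole observation window. A sketch of such a probe; the image and durations are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-timeout
spec:
  containers:
  - name: busybox
    image: busybox:1.33
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["sh", "-c", "sleep 10"]   # always exceeds the timeout below
      timeoutSeconds: 1
      periodSeconds: 2

The [MinimumKubeletVersion:1.20] tag matters: before 1.20 the kubelet did not enforce timeouts on exec probes, so this probe would have run to completion and appeared to succeed.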
• [SLOW TEST:76.131 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":1,"skipped":115,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:12.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0506 23:33:12.449589 44 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 23:33:12.449: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 23:33:12.451: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-5e968877-bc0f-4436-a633-44df7eb0a1b6 in namespace container-probe-3545 May 6 23:33:32.471: INFO: Started pod startup-5e968877-bc0f-4436-a633-44df7eb0a1b6 in namespace container-probe-3545 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:33:32.474: INFO: Initial restart count of pod startup-5e968877-bc0f-4436-a633-44df7eb0a1b6 is 0 May 6 23:34:34.616: INFO: Restart count of pod container-probe-3545/startup-5e968877-bc0f-4436-a633-44df7eb0a1b6 is now 1 (1m2.141639295s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:34.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3545" for this suite. 
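The startup-probe spec above is the restart counterpart: a failing startupProbe, unlike a failing readiness probe, kills and restarts the container once its failure budget is spent, which is the restartCount 0 to 1 transition after roughly a minute in the log. A sketch; the image, command, and thresholds are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: startup-fails
spec:
  containers:
  - name: busybox
    image: busybox:1.33
    command: ["sh", "-c", "sleep 600"]
    startupProbe:
      exec:
        command: ["cat", "/tmp/never-exists"]   # always fails
      failureThreshold: 3
      periodSeconds: 10    # restart lands after roughly failureThreshold x periodSeconds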
• [SLOW TEST:82.209 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":1,"skipped":538,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:35.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-388/configmap-test-a485b7b8-8be1-427b-9ec4-ef973c4b0757 STEP: Updating configMap configmap-388/configmap-test-a485b7b8-8be1-427b-9ec4-ef973c4b0757 STEP: Verifying update of ConfigMap configmap-388/configmap-test-a485b7b8-8be1-427b-9ec4-ef973c4b0757 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:35.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-388" for this suite. 
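The ConfigMap spec above is plain CRUD against the API: create the object, mutate a data key, and read it back to confirm the update took. The object involved is as small as they come; the name and values here are illustrative, and the update step can be reproduced with kubectl apply or kubectl patch against the same key:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  data: value      # the spec changes this value in place, then reads it back to verify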
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":2,"skipped":750,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:35.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-ba80aea5-bb56-4263-9460-81678acb71f6 in namespace container-probe-4365 May 6 23:34:39.189: INFO: Started pod liveness-override-ba80aea5-bb56-4263-9460-81678acb71f6 in namespace container-probe-4365 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:34:39.191: INFO: Initial restart count of pod liveness-override-ba80aea5-bb56-4263-9460-81678acb71f6 is 0 May 6 23:34:41.198: INFO: Restart count of pod container-probe-4365/liveness-override-ba80aea5-bb56-4263-9460-81678acb71f6 is now 1 (2.006628029s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:41.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4365" for this suite. 
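The [Feature:ProbeTerminationGracePeriod] spec above sets a grace period on the liveness probe itself; when the probe fails, the kubelet bounds the kill with the probe-level value instead of the pod's much longer terminationGracePeriodSeconds, so the restart lands within seconds rather than minutes. A sketch, assuming the ProbeTerminationGracePeriod feature gate is enabled (it is alpha in a v1.21 cluster like this one); all values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-override
spec:
  terminationGracePeriodSeconds: 300   # pod-level default for kills
  containers:
  - name: busybox
    image: busybox:1.33
    command: ["sh", "-c", "sleep 1800"]
    livenessProbe:
      exec:
        command: ["/bin/false"]        # fails immediately
      initialDelaySeconds: 1
      failureThreshold: 1
      terminationGracePeriodSeconds: 5 # overrides the pod-level value for probe-triggered kills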
• [SLOW TEST:6.066 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":3,"skipped":792,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:41.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 6 23:34:41.358: INFO: Waiting up to 5m0s for pod "security-context-bda4ee9c-5330-4008-aa58-a8601883e278" in namespace "security-context-8857" to be "Succeeded or Failed" May 6 23:34:41.361: INFO: Pod "security-context-bda4ee9c-5330-4008-aa58-a8601883e278": Phase="Pending", Reason="", readiness=false. Elapsed: 2.364096ms May 6 23:34:43.365: INFO: Pod "security-context-bda4ee9c-5330-4008-aa58-a8601883e278": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006739638s May 6 23:34:45.369: INFO: Pod "security-context-bda4ee9c-5330-4008-aa58-a8601883e278": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010491554s STEP: Saw pod success May 6 23:34:45.369: INFO: Pod "security-context-bda4ee9c-5330-4008-aa58-a8601883e278" satisfied condition "Succeeded or Failed" May 6 23:34:45.371: INFO: Trying to get logs from node node1 pod security-context-bda4ee9c-5330-4008-aa58-a8601883e278 container test-container: STEP: delete the pod May 6 23:34:45.385: INFO: Waiting for pod security-context-bda4ee9c-5330-4008-aa58-a8601883e278 to disappear May 6 23:34:45.387: INFO: Pod security-context-bda4ee9c-5330-4008-aa58-a8601883e278 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:45.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8857" for this suite. 
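The seccomp spec above launches a pod that asks for no seccomp filtering and then reads the container's own /proc/self/status, expecting the no-filtering mode ("Seccomp: 0"). The STEP line names the old alpha annotation (seccomp.security.alpha.kubernetes.io/pod); the structured field that replaced it looks like this, with illustrative names and image:

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-unconfined
spec:
  restartPolicy: Never
  securityContext:
    seccompProfile:
      type: Unconfined       # pod-wide; a container's own securityContext can override this
  containers:
  - name: test
    image: busybox:1.33
    command: ["sh", "-c", "grep Seccomp /proc/self/status"]

The companion spec later in this run ("seccomp unconfined on the container") does the same at container scope by putting the seccompProfile under the container's securityContext instead.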
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":4,"skipped":849,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:44.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-e70831b0-a88f-4de0-9aca-9826b4cf64c4 in namespace container-probe-3414 May 6 23:33:56.283: INFO: Started pod busybox-e70831b0-a88f-4de0-9aca-9826b4cf64c4 in namespace container-probe-3414 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:33:56.286: INFO: Initial restart count of pod busybox-e70831b0-a88f-4de0-9aca-9826b4cf64c4 is 0 May 6 23:34:46.410: INFO: Restart count of pod container-probe-3414/busybox-e70831b0-a88f-4de0-9aca-9826b4cf64c4 is now 1 (50.124729672s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:46.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3414" for this suite. 
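This liveness variant pairs with the readiness-timeout spec earlier in the run: the same slow exec probe, but on the liveness side, so once the timeout is counted as a failure the kubelet restarts the container (restartCount reaches 1 after ~50s here) instead of merely marking it unready. A sketch; the image and durations are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-timeout
spec:
  containers:
  - name: busybox
    image: busybox:1.33
    command: ["sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["sh", "-c", "sleep 10"]   # always exceeds the timeout below
      timeoutSeconds: 1
      periodSeconds: 10
      failureThreshold: 3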
• [SLOW TEST:62.184 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:45.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:49.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-206" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":5,"skipped":1023,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:46.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:50.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3292" for this suite. 
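The private-registry spec above expects the pull to fail because no credentials are attached to the pod. The usual fix outside the suite is a docker-registry secret (created with kubectl create secret docker-registry) referenced from the pod; everything below is hypothetical (registry host, secret name, image):

apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  imagePullSecrets:
  - name: regcred                               # hypothetical docker-registry secret
  containers:
  - name: app
    image: registry.example.com/team/app:1.0    # hypothetical private image

Without the imagePullSecrets entry, and with no matching node-level or service-account credentials, the container sits in ErrImagePull/ImagePullBackOff, which is the state the spec asserts on.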
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":5,"skipped":268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:49.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 May 6 23:34:49.866: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod May 6 23:34:49.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5655 create -f -' May 6 23:34:50.351: INFO: stderr: "" May 6 23:34:50.351: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly May 6 23:34:56.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5655 logs dapi-test-pod test-container' May 6 23:34:56.549: INFO: stderr: "" May 6 23:34:56.549: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-5655\nMY_POD_IP=10.244.3.9\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" May 6 23:34:56.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5655 logs dapi-test-pod test-container' May 6 23:34:56.796: INFO: stderr: "" May 6 23:34:56.796: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-5655\nMY_POD_IP=10.244.3.9\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:56.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-5655" for this suite. 
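Every MY_* variable in the captured output above comes from the downward API: the pod injects its own name, namespace, and IPs into the environment via fieldRef. The example pod is approximately this; the env names match the log, the fieldPath values are the standard ones, and the image is assumed (the log does not show it):

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.33            # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP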
• [SLOW TEST:6.976 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133 should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":6,"skipped":1039,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:50.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 May 6 23:34:51.014: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-5866" to be "Succeeded or Failed" May 6 23:34:51.017: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 3.188366ms May 6 23:34:53.020: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006544343s May 6 23:34:55.025: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010745683s May 6 23:34:57.028: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013797142s May 6 23:34:57.028: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:34:57.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5866" for this suite. 
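This spec is the passing counterpart of the earlier "should not run without a specified user ID" case: with an explicit non-root runAsUser, the runAsNonRoot check can be satisfied statically and the container starts normally. A sketch; the pod name, image, and UID are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: explicit-nonroot-uid
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.33
    command: ["id"]              # prints uid=1234 ...
    securityContext:
      runAsNonRoot: true
      runAsUser: 1234            # any non-zero UID satisfies the check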
• [SLOW TEST:6.063 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":6,"skipped":486,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:56.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 6 23:34:56.938: INFO: Waiting up to 5m0s for pod "security-context-bc625aaf-ce5b-46f6-bac2-d43f0c302fac" in namespace "security-context-6076" to be "Succeeded or Failed" May 6 23:34:56.940: INFO: Pod "security-context-bc625aaf-ce5b-46f6-bac2-d43f0c302fac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187554ms May 6 23:34:58.943: INFO: Pod "security-context-bc625aaf-ce5b-46f6-bac2-d43f0c302fac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005611789s May 6 23:35:00.947: INFO: Pod "security-context-bc625aaf-ce5b-46f6-bac2-d43f0c302fac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009132745s STEP: Saw pod success May 6 23:35:00.947: INFO: Pod "security-context-bc625aaf-ce5b-46f6-bac2-d43f0c302fac" satisfied condition "Succeeded or Failed" May 6 23:35:00.950: INFO: Trying to get logs from node node2 pod security-context-bc625aaf-ce5b-46f6-bac2-d43f0c302fac container test-container: STEP: delete the pod May 6 23:35:00.965: INFO: Waiting for pod security-context-bc625aaf-ce5b-46f6-bac2-d43f0c302fac to disappear May 6 23:35:00.967: INFO: Pod security-context-bc625aaf-ce5b-46f6-bac2-d43f0c302fac no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:00.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6076" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":7,"skipped":1082,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:57.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container May 6 23:34:57.111: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) May 6 23:34:59.114: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) May 6 23:35:01.117: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) May 6 23:35:03.117: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container May 6 23:35:03.119: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-923 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:03.119: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:03.215: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-923 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:03.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container May 6 23:35:03.456: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-923 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:03.456: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:03.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-923" for this suite. 
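The exec lines in the log above are the whole test: ip link add dummy1 type dummy requires CAP_NET_ADMIN, which only the privileged container has, so the command succeeds there and fails in the unprivileged sibling. The two-container pod looks roughly like this; the container names match the log, the image is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
  - name: privileged-container
    image: busybox:1.33
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      privileged: true           # full device and capability access
  - name: not-privileged-container
    image: busybox:1.33
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      privileged: false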
• [SLOW TEST:6.481 seconds] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":7,"skipped":499,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:03.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 May 6 23:35:03.633: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:03.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-8808" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:01.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 May 6 23:35:01.061: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-3fd2403e-cdf4-4f18-948f-f3632c166404" in namespace "security-context-test-3175" to be "Succeeded or Failed" May 6 23:35:01.063: INFO: Pod "alpine-nnp-nil-3fd2403e-cdf4-4f18-948f-f3632c166404": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00646ms May 6 23:35:03.067: INFO: Pod "alpine-nnp-nil-3fd2403e-cdf4-4f18-948f-f3632c166404": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005579953s May 6 23:35:05.071: INFO: Pod "alpine-nnp-nil-3fd2403e-cdf4-4f18-948f-f3632c166404": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009585198s May 6 23:35:05.071: INFO: Pod "alpine-nnp-nil-3fd2403e-cdf4-4f18-948f-f3632c166404" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:05.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3175" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":1104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:05.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 23:35:08.425: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:08.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5619" for this suite. 
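------------------------------
The TerminationMessagePath behavior verified above can be reproduced with a sketch like the following; only the DONE string comes from the log, while the pod name and image are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: term-demo
    image: busybox:1.29                  # illustrative image
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File       # read the message from the file, not the container logs

After the pod succeeds, the message surfaces at .status.containerStatuses[0].state.terminated.message as DONE, which is the comparison the log records.
------------------------------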
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":9,"skipped":1243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:08.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups May 6 23:35:08.558: INFO: Waiting up to 5m0s for pod "security-context-5c7677f3-641f-4242-b42d-0f43d8c458ce" in namespace "security-context-5702" to be "Succeeded or Failed" May 6 23:35:08.560: INFO: Pod "security-context-5c7677f3-641f-4242-b42d-0f43d8c458ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143265ms May 6 23:35:10.565: INFO: Pod "security-context-5c7677f3-641f-4242-b42d-0f43d8c458ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006769025s May 6 23:35:12.569: INFO: Pod "security-context-5c7677f3-641f-4242-b42d-0f43d8c458ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011670808s May 6 23:35:14.573: INFO: Pod "security-context-5c7677f3-641f-4242-b42d-0f43d8c458ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015140208s STEP: Saw pod success May 6 23:35:14.573: INFO: Pod "security-context-5c7677f3-641f-4242-b42d-0f43d8c458ce" satisfied condition "Succeeded or Failed" May 6 23:35:14.576: INFO: Trying to get logs from node node2 pod security-context-5c7677f3-641f-4242-b42d-0f43d8c458ce container test-container: STEP: delete the pod May 6 23:35:14.589: INFO: Waiting for pod security-context-5c7677f3-641f-4242-b42d-0f43d8c458ce to disappear May 6 23:35:14.590: INFO: Pod security-context-5c7677f3-641f-4242-b42d-0f43d8c458ce no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:14.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5702" for this suite. 
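------------------------------
pod.Spec.SecurityContext.SupplementalGroups, as tested above, attaches extra group IDs to every container process in the pod; a minimal sketch (the UID, GIDs, and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: supplemental-groups-demo         # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                      # illustrative non-root UID
    supplementalGroups: [1234, 5678]     # extra GIDs attached to the container process
  containers:
  - name: test-container
    image: busybox:1.29                  # illustrative image
    command: ["id", "-G"]                # prints the group list, which should include 1234 and 5678
------------------------------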
• [SLOW TEST:6.077 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":10,"skipped":1280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:12.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-cef56d4a-2eb7-4f6d-b9cd-d7b4682de862 in namespace container-probe-3851 May 6 23:34:20.929: INFO: Started pod startup-cef56d4a-2eb7-4f6d-b9cd-d7b4682de862 in namespace container-probe-3851 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:34:20.931: INFO: Initial restart count of pod startup-cef56d4a-2eb7-4f6d-b9cd-d7b4682de862 is 0 May 6 23:35:15.067: INFO: Restart count of pod container-probe-3851/startup-cef56d4a-2eb7-4f6d-b9cd-d7b4682de862 is now 1 (54.135555978s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:15.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3851" for this suite. 
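------------------------------
The probe interplay above (liveness held off until the startup probe succeeds, then a failing liveness probe forcing a restart) can be sketched like this; the marker files, timings, and image are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: startup-then-liveness-demo       # illustrative name
spec:
  containers:
  - name: app
    image: busybox:1.29                  # illustrative image
    command: ["/bin/sh", "-c", "sleep 10; touch /tmp/startup-done; sleep 600"]
    startupProbe:
      exec:
        command: ["cat", "/tmp/startup-done"]
      periodSeconds: 2
      failureThreshold: 30               # allow up to ~60s of startup time
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"] # never created, so liveness fails once it is enabled
      periodSeconds: 5
      failureThreshold: 1

restartCount moves from 0 to 1 only after the startup probe has passed, which is exactly the transition the log records.
------------------------------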
• [SLOW TEST:62.197 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":4,"skipped":609,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:15.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 May 6 23:35:15.135: INFO: Waiting up to 5m0s for pod "busybox-user-0-ea7fe4bc-55cc-414f-b8a8-fa970b7bbd83" in namespace "security-context-test-5920" to be "Succeeded or Failed" May 6 23:35:15.137: INFO: Pod "busybox-user-0-ea7fe4bc-55cc-414f-b8a8-fa970b7bbd83": Phase="Pending", Reason="", readiness=false. Elapsed: 1.807738ms May 6 23:35:17.142: INFO: Pod "busybox-user-0-ea7fe4bc-55cc-414f-b8a8-fa970b7bbd83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006349494s May 6 23:35:19.146: INFO: Pod "busybox-user-0-ea7fe4bc-55cc-414f-b8a8-fa970b7bbd83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010772256s May 6 23:35:19.146: INFO: Pod "busybox-user-0-ea7fe4bc-55cc-414f-b8a8-fa970b7bbd83" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:19.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5920" for this suite. 
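------------------------------
Running a container explicitly as root, per the spec above, needs only runAsUser: 0; a sketch (pod name and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-0-demo              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29                  # illustrative image
    command: ["id", "-u"]                # prints 0 when the container runs as root
    securityContext:
      runAsUser: 0
------------------------------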
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":619,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:14.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod May 6 23:35:14.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9862 create -f -' May 6 23:35:15.346: INFO: stderr: "" May 6 23:35:15.346: INFO: stdout: "secret/test-secret created\n" May 6 23:35:15.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9862 create -f -' May 6 23:35:15.702: INFO: stderr: "" May 6 23:35:15.702: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly May 6 23:35:19.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9862 logs secret-test-pod test-container' May 6 23:35:19.895: INFO: stderr: "" May 6 23:35:19.895: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:19.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-9862" for this suite. 
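------------------------------
The secret-reading example above corresponds to a Secret plus a pod that mounts it. The resource names, key, value, and mount path below are taken from the log; the image is an illustrative assumption:

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  data-1: dmFsdWUtMQ==                   # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29                  # illustrative image
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret

kubectl logs secret-test-pod test-container should then print value-1, matching the output the test checks.
------------------------------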
• [SLOW TEST:5.041 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":11,"skipped":1421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:27.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-a4577115-bb4c-499d-b173-58634e2edd86 in namespace container-probe-6457 May 6 23:34:31.879: INFO: Started pod busybox-a4577115-bb4c-499d-b173-58634e2edd86 in namespace container-probe-6457 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:34:31.882: INFO: Initial restart count of pod busybox-a4577115-bb4c-499d-b173-58634e2edd86 is 0 May 6 23:35:21.988: INFO: Restart count of pod container-probe-6457/busybox-a4577115-bb4c-499d-b173-58634e2edd86 is now 1 (50.105840376s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:21.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6457" for this suite. 
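------------------------------
An exec liveness probe that always outlives its timeout, as exercised above, can be sketched as follows; the timings and image are illustrative assumptions (since v1.20 the kubelet enforces timeoutSeconds for exec probes via the ExecProbeTimeout gate):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-timeout-demo       # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.29                  # illustrative image
    command: ["/bin/sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["/bin/sh", "-c", "sleep 10"]  # always runs past the timeout below
      timeoutSeconds: 1
      periodSeconds: 5
      failureThreshold: 3

Each probe attempt times out and counts as a failure, so the container is restarted on roughly the schedule the log shows.
------------------------------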
• [SLOW TEST:54.165 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":2,"skipped":127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:19.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 6 23:35:19.214: INFO: Waiting up to 5m0s for pod "security-context-19ed65e9-60cc-4cdc-b5fe-3198ba30d264" in namespace "security-context-1944" to be "Succeeded or Failed" May 6 23:35:19.217: INFO: Pod "security-context-19ed65e9-60cc-4cdc-b5fe-3198ba30d264": Phase="Pending", Reason="", readiness=false. Elapsed: 2.912249ms May 6 23:35:21.220: INFO: Pod "security-context-19ed65e9-60cc-4cdc-b5fe-3198ba30d264": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006014109s May 6 23:35:23.226: INFO: Pod "security-context-19ed65e9-60cc-4cdc-b5fe-3198ba30d264": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011612408s May 6 23:35:25.229: INFO: Pod "security-context-19ed65e9-60cc-4cdc-b5fe-3198ba30d264": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014643784s STEP: Saw pod success May 6 23:35:25.229: INFO: Pod "security-context-19ed65e9-60cc-4cdc-b5fe-3198ba30d264" satisfied condition "Succeeded or Failed" May 6 23:35:25.232: INFO: Trying to get logs from node node2 pod security-context-19ed65e9-60cc-4cdc-b5fe-3198ba30d264 container test-container: STEP: delete the pod May 6 23:35:25.256: INFO: Waiting for pod security-context-19ed65e9-60cc-4cdc-b5fe-3198ba30d264 to disappear May 6 23:35:25.262: INFO: Pod security-context-19ed65e9-60cc-4cdc-b5fe-3198ba30d264 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:25.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1944" for this suite. 
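------------------------------
Unlike the per-container variant earlier, the spec above sets the unconfined profile at pod scope; a sketch with the GA field (the test again drives the alpha annotation, and the name and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-pod-unconfined           # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    seccompProfile:
      type: Unconfined                   # pod-level default inherited by every container
  containers:
  - name: test-container
    image: busybox:1.29                  # illustrative image
    command: ["grep", "Seccomp:", "/proc/self/status"]
------------------------------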
• [SLOW TEST:6.097 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":6,"skipped":627,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:25.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 May 6 23:35:25.325: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:25.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-2040" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:12.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 May 6 23:34:12.523: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 May 6 23:34:12.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-629 create -f -' May 6 23:34:13.063: INFO: stderr: "" May 6 23:34:13.063: INFO: stdout: "pod/liveness-exec created\n" May 6 23:34:13.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-629 create -f -' May 6 23:34:13.427: INFO: stderr: "" May 6 23:34:13.427: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts May 6 23:34:19.437: INFO: Pod: liveness-http, restart count:0 May 6 23:34:19.437: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:21.440: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:21.441: INFO: Pod: liveness-http, restart count:0 May 6 23:34:23.448: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:23.448: INFO: Pod: liveness-http, restart count:0 May 6 23:34:25.452: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:25.452: INFO: Pod: liveness-http, restart count:0 May 6 23:34:27.456: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:27.456: INFO: Pod: liveness-http, restart count:0 May 6 23:34:29.460: INFO: Pod: liveness-http, restart count:0 May 6 23:34:29.460: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:31.463: INFO: Pod: liveness-http, restart count:0 May 6 23:34:31.463: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:33.468: INFO: Pod: liveness-http, restart count:0 May 6 23:34:33.468: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:35.471: INFO: Pod: liveness-http, restart count:0 May 6 23:34:35.471: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:37.478: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:37.478: INFO: Pod: liveness-http, restart count:0 May 6 23:34:39.481: INFO: Pod: liveness-http, restart count:0 May 6 23:34:39.481: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:41.484: INFO: Pod: liveness-http, restart count:0 May 6 23:34:41.484: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:43.489: INFO: Pod: liveness-http, restart count:0 May 6 23:34:43.489: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:45.495: INFO: Pod: liveness-http, restart count:0 May 6 23:34:45.495: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:47.500: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:47.501: INFO: Pod: liveness-http, restart count:0 May 6 23:34:49.503: INFO: Pod: liveness-http, restart count:0 May 6 23:34:49.503: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:51.506: INFO: Pod: liveness-http, restart count:0 May 6 23:34:51.506: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:53.511: INFO: Pod: liveness-http, restart count:0 May 6 23:34:53.512: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:55.514: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:55.514: INFO: Pod: liveness-http, restart count:1 May 6 23:34:55.514: INFO: Saw liveness-http restart, succeeded... 
May 6 23:34:57.519: INFO: Pod: liveness-exec, restart count:0 May 6 23:34:59.523: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:01.526: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:03.530: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:05.534: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:07.538: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:09.543: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:11.547: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:13.551: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:15.554: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:17.559: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:19.563: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:21.567: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:23.575: INFO: Pod: liveness-exec, restart count:0 May 6 23:35:25.581: INFO: Pod: liveness-exec, restart count:1 May 6 23:35:25.582: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:25.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-629" for this suite. • [SLOW TEST:73.097 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":5,"skipped":737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:20.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:26.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7801" for this suite. 
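------------------------------
Pulling from a private registry with a secret, as above, hinges on a kubernetes.io/dockerconfigjson Secret referenced via imagePullSecrets. Everything in this sketch (secret name, registry, image) is an illustrative assumption, and the credential payload is deliberately elided:

apiVersion: v1
kind: Secret
metadata:
  name: regcred                          # hypothetical secret name
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: "..."               # elided: base64 of a docker config.json with credentials
---
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo               # hypothetical name
spec:
  restartPolicy: Never
  imagePullSecrets:
  - name: regcred                        # kubelet presents these credentials for the pull
  containers:
  - name: app
    image: registry.example.com/team/app:1.0   # hypothetical private image
------------------------------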
• [SLOW TEST:6.085 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":12,"skipped":1482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:04.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination May 6 23:35:28.110: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:28.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4632" for this suite. 
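------------------------------
The graceful-termination behavior above comes from a preStop hook that the kubelet must finish (within the grace period) before killing the container; a sketch with illustrative timings and image:

apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo                     # illustrative name
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: busybox:1.29                  # illustrative image
    command: ["/bin/sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 20"]  # pod stays Running until this completes

During a graceful kubectl delete, the pod keeps reporting Running while the hook sleeps, which is the "pod is running" check in the log.
------------------------------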
• [SLOW TEST:24.081 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":8,"skipped":724,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:22.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 May 6 23:35:22.321: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-334603bf-fa62-49f4-8f71-0c739b41eb07" in namespace "security-context-test-8221" to be "Succeeded or Failed" May 6 23:35:22.323: INFO: Pod "alpine-nnp-true-334603bf-fa62-49f4-8f71-0c739b41eb07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132426ms May 6 23:35:24.327: INFO: Pod "alpine-nnp-true-334603bf-fa62-49f4-8f71-0c739b41eb07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006615715s May 6 23:35:26.331: INFO: Pod "alpine-nnp-true-334603bf-fa62-49f4-8f71-0c739b41eb07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00991064s May 6 23:35:28.335: INFO: Pod "alpine-nnp-true-334603bf-fa62-49f4-8f71-0c739b41eb07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014651526s May 6 23:35:28.335: INFO: Pod "alpine-nnp-true-334603bf-fa62-49f4-8f71-0c739b41eb07" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:28.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8221" for this suite. 
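------------------------------
allowPrivilegeEscalation: true, checked above, leaves no_new_privs unset so a setuid-root binary can raise its effective UID. The e2e image ships such a helper; the image and command below are illustrative assumptions only:

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-true-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: alpine
    image: alpine:3.15                   # illustrative image
    command: ["/bin/sh", "-c", "id -u"]  # the e2e test instead execs a setuid binary and expects uid 0
    securityContext:
      runAsUser: 1000                    # start as non-root
      allowPrivilegeEscalation: true     # do not set no_new_privs on the container process
------------------------------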
• [SLOW TEST:6.065 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":269,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:26.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 May 6 23:35:26.166: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-f0d12fa9-ec89-49a8-92c1-bc02a7fed47e" in namespace "security-context-test-247" to be "Succeeded or Failed" May 6 23:35:26.168: INFO: Pod "busybox-privileged-true-f0d12fa9-ec89-49a8-92c1-bc02a7fed47e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091531ms May 6 23:35:28.174: INFO: Pod "busybox-privileged-true-f0d12fa9-ec89-49a8-92c1-bc02a7fed47e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008147975s May 6 23:35:30.179: INFO: Pod "busybox-privileged-true-f0d12fa9-ec89-49a8-92c1-bc02a7fed47e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012542921s May 6 23:35:30.179: INFO: Pod "busybox-privileged-true-f0d12fa9-ec89-49a8-92c1-bc02a7fed47e" satisfied condition "Succeeded or Failed" May 6 23:35:30.185: INFO: Got logs for pod "busybox-privileged-true-f0d12fa9-ec89-49a8-92c1-bc02a7fed47e": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:30.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-247" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":13,"skipped":1487,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:28.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars May 6 23:35:28.476: INFO: Waiting up to 5m0s for pod "downward-api-380066d8-1285-4626-88e5-db7f2ee89ef6" in namespace "downward-api-3002" to be "Succeeded or Failed" May 6 23:35:28.483: INFO: Pod "downward-api-380066d8-1285-4626-88e5-db7f2ee89ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.303761ms May 6 23:35:30.487: INFO: Pod "downward-api-380066d8-1285-4626-88e5-db7f2ee89ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010993657s May 6 23:35:32.491: INFO: Pod "downward-api-380066d8-1285-4626-88e5-db7f2ee89ef6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014745726s STEP: Saw pod success May 6 23:35:32.491: INFO: Pod "downward-api-380066d8-1285-4626-88e5-db7f2ee89ef6" satisfied condition "Succeeded or Failed" May 6 23:35:32.493: INFO: Trying to get logs from node node1 pod downward-api-380066d8-1285-4626-88e5-db7f2ee89ef6 container dapi-container: STEP: delete the pod May 6 23:35:32.737: INFO: Waiting for pod downward-api-380066d8-1285-4626-88e5-db7f2ee89ef6 to disappear May 6 23:35:32.739: INFO: Pod downward-api-380066d8-1285-4626-88e5-db7f2ee89ef6 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:32.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3002" for this suite. 
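------------------------------
The host-network downward API variant above maps status.hostIP and status.podIP into environment variables; a sketch (pod name, image, and echo command are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo                # illustrative name
spec:
  restartPolicy: Never
  hostNetwork: true                      # with host networking, HOST_IP and POD_IP match
  containers:
  - name: dapi-container
    image: busybox:1.29                  # illustrative image
    command: ["/bin/sh", "-c", "echo HOST_IP=$HOST_IP POD_IP=$POD_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
------------------------------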
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":4,"skipped":314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:30.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 May 6 23:35:30.235: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-3646" to be "Succeeded or Failed" May 6 23:35:30.238: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.623059ms May 6 23:35:32.241: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005817739s May 6 23:35:34.245: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00961657s May 6 23:35:34.245: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:34.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3646" for this suite. 
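------------------------------
The "image specified user ID" case above sets runAsNonRoot without runAsUser, so the kubelet trusts, and validates, the numeric USER baked into the image. The image name below is hypothetical, standing in for the e2e nonroot test image:

apiVersion: v1
kind: Pod
metadata:
  name: implicit-nonroot-uid             # name from the log
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: example.com/nonroot:1.0       # hypothetical image whose Dockerfile sets USER 1234
    command: ["id", "-u"]
    securityContext:
      runAsNonRoot: true                 # no runAsUser: the image's USER must be a non-zero numeric UID

If the image declared USER 0 or a non-numeric user, the kubelet would refuse to start the container instead of letting it run.
------------------------------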
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":14,"skipped":1490,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:33.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:35.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6735" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":5,"skipped":481,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 6 23:35:35.253: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:28.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes May 6 23:35:28.167: INFO: Waiting up to 5m0s for pod "pod-always-succeed92d40e1f-4859-4583-8e40-74ef6fa6b6a9" in namespace "pods-4671" to be "Succeeded or Failed" May 6 23:35:28.169: INFO: Pod "pod-always-succeed92d40e1f-4859-4583-8e40-74ef6fa6b6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15986ms May 6 23:35:30.173: INFO: Pod "pod-always-succeed92d40e1f-4859-4583-8e40-74ef6fa6b6a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006012138s May 6 23:35:32.180: INFO: Pod "pod-always-succeed92d40e1f-4859-4583-8e40-74ef6fa6b6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013627933s May 6 23:35:34.186: INFO: Pod "pod-always-succeed92d40e1f-4859-4583-8e40-74ef6fa6b6a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019267665s STEP: Saw pod success May 6 23:35:34.186: INFO: Pod "pod-always-succeed92d40e1f-4859-4583-8e40-74ef6fa6b6a9" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:36.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4671" for this suite. • [SLOW TEST:8.074 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:34.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 6 23:35:34.549: INFO: Waiting up to 5m0s for pod "security-context-1aab4bd1-4487-4356-bbe6-37fdd70f66da" in namespace "security-context-2578" to be "Succeeded or Failed" May 6 23:35:34.551: INFO: Pod "security-context-1aab4bd1-4487-4356-bbe6-37fdd70f66da": Phase="Pending", Reason="", readiness=false. Elapsed: 1.944086ms May 6 23:35:36.554: INFO: Pod "security-context-1aab4bd1-4487-4356-bbe6-37fdd70f66da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005085672s May 6 23:35:38.558: INFO: Pod "security-context-1aab4bd1-4487-4356-bbe6-37fdd70f66da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008922106s May 6 23:35:40.561: INFO: Pod "security-context-1aab4bd1-4487-4356-bbe6-37fdd70f66da": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.012306538s STEP: Saw pod success May 6 23:35:40.562: INFO: Pod "security-context-1aab4bd1-4487-4356-bbe6-37fdd70f66da" satisfied condition "Succeeded or Failed" May 6 23:35:40.564: INFO: Trying to get logs from node node2 pod security-context-1aab4bd1-4487-4356-bbe6-37fdd70f66da container test-container: STEP: delete the pod May 6 23:35:40.747: INFO: Waiting for pod security-context-1aab4bd1-4487-4356-bbe6-37fdd70f66da to disappear May 6 23:35:40.749: INFO: Pod security-context-1aab4bd1-4487-4356-bbe6-37fdd70f66da no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:40.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-2578" for this suite. • [SLOW TEST:6.244 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":15,"skipped":1625,"failed":0} May 6 23:35:40.759: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:25.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 May 6 23:35:25.530: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) May 6 23:35:27.534: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) May 6 23:35:29.534: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) May 6 23:35:31.534: INFO: The status of Pod master is Running (Ready = true) May 6 23:35:31.549: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) May 6 23:35:33.554: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) May 6 23:35:35.554: INFO: The status of Pod slave is Running (Ready = true) May 6 23:35:35.571: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) May 6 23:35:37.575: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) May 6 23:35:39.575: INFO: The status of Pod private is Running (Ready = true) May 6 23:35:39.590: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) May 6 23:35:41.594: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) May 6 23:35:43.595: INFO: The status of Pod default is Running (Ready = true) May 6 23:35:43.601: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7824 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 
6 23:35:43.601: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:43.685: INFO: Exec stderr: "" May 6 23:35:43.689: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7824 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:43.689: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:43.766: INFO: Exec stderr: "" May 6 23:35:43.770: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7824 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:43.770: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:43.852: INFO: Exec stderr: "" May 6 23:35:43.855: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7824 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:43.855: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:43.940: INFO: Exec stderr: "" May 6 23:35:43.943: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7824 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:43.943: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:44.026: INFO: Exec stderr: "" May 6 23:35:44.030: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7824 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:44.030: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:44.120: INFO: Exec stderr: "" May 6 23:35:44.123: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7824 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:44.123: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:44.208: INFO: Exec stderr: "" May 6 23:35:44.211: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7824 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:44.211: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:44.298: INFO: Exec stderr: "" May 6 23:35:44.301: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7824 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:44.301: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:44.390: INFO: Exec stderr: "" May 6 23:35:44.394: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7824 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:44.394: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:44.482: INFO: Exec stderr: "" May 6 23:35:44.485: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7824 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:44.485: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:44.579: INFO: Exec stderr: "" May 6 23:35:44.581: INFO: 
ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7824 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:44.581: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:44.663: INFO: Exec stderr: "" May 6 23:35:44.665: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7824 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:44.665: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:44.744: INFO: Exec stderr: "" May 6 23:35:44.746: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7824 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:44.747: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:44.855: INFO: Exec stderr: "" May 6 23:35:44.858: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7824 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:44.858: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:44.940: INFO: Exec stderr: "" May 6 23:35:44.943: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7824 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:44.943: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:45.024: INFO: Exec stderr: "" May 6 23:35:45.027: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-7824 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:45.027: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:45.122: INFO: Exec stderr: "" May 6 23:35:45.125: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-7824 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:45.125: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:45.202: INFO: Exec stderr: "" May 6 23:35:45.204: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-7824 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:45.204: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:45.293: INFO: Exec stderr: "" May 6 23:35:45.296: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-7824 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:45.296: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:45.376: INFO: Exec stderr: "" May 6 23:35:47.394: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-7824"/host; mount -t tmpfs e2e-mount-propagation-host 
"/var/lib/kubelet/mount-propagation-7824"/host; echo host > "/var/lib/kubelet/mount-propagation-7824"/host/file] Namespace:mount-propagation-7824 PodName:hostexec-node2-vhrmb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 6 23:35:47.394: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:47.494: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7824 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:47.495: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:47.579: INFO: pod master mount master: stdout: "master", stderr: "" error: May 6 23:35:47.582: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7824 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:47.582: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:47.668: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:47.670: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7824 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:47.670: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:47.756: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:47.760: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7824 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:47.760: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:47.848: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:47.851: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7824 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:47.851: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:47.933: INFO: pod master mount host: stdout: "host", stderr: "" error: May 6 23:35:47.936: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7824 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:47.936: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:48.018: INFO: pod slave mount master: stdout: "master", stderr: "" error: May 6 23:35:48.021: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7824 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:48.021: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:48.130: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: May 6 23:35:48.133: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7824 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} May 6 23:35:48.133: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:48.213: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:48.217: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7824 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:48.217: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:48.297: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:48.300: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7824 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:48.300: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:48.386: INFO: pod slave mount host: stdout: "host", stderr: "" error: May 6 23:35:48.389: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7824 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:48.389: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:48.474: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:48.477: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7824 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:48.477: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:48.563: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:48.565: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7824 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:48.565: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:48.658: INFO: pod private mount private: stdout: "private", stderr: "" error: May 6 23:35:48.661: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7824 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:48.661: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:48.749: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:48.752: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7824 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:48.752: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:48.857: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:48.860: INFO: ExecWithOptions {Command:[/bin/sh -c cat 
/mnt/test/master/file] Namespace:mount-propagation-7824 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:48.861: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:48.961: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:48.963: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7824 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:48.964: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:49.050: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:49.054: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7824 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:49.054: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:49.144: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:49.146: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7824 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:49.146: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:49.242: INFO: pod default mount default: stdout: "default", stderr: "" error: May 6 23:35:49.245: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7824 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:49.245: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:49.328: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 May 6 23:35:49.328: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-7824"/master/file` = master] Namespace:mount-propagation-7824 PodName:hostexec-node2-vhrmb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 6 23:35:49.328: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:49.433: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-7824"/slave/file] Namespace:mount-propagation-7824 PodName:hostexec-node2-vhrmb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 6 23:35:49.433: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:49.519: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-7824"/host] Namespace:mount-propagation-7824 PodName:hostexec-node2-vhrmb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 6 23:35:49.519: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:49.617: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-7824 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:49.617: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:49.723: INFO: Exec stderr: "" May 6 23:35:49.727: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-7824 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:49.727: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:49.816: INFO: Exec stderr: "" May 6 23:35:49.819: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-7824 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:49.819: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:49.910: INFO: Exec stderr: "" May 6 23:35:49.913: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-7824 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 23:35:49.913: INFO: >>> kubeConfig: /root/.kube/config May 6 23:35:50.019: INFO: Exec stderr: "" May 6 23:35:50.019: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-7824"] Namespace:mount-propagation-7824 PodName:hostexec-node2-vhrmb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 6 23:35:50.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node2-vhrmb in namespace mount-propagation-7824 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:35:50.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-7824" for this suite. 
• [SLOW TEST:24.633 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":7,"skipped":727,"failed":0} May 6 23:35:50.127: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:11.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods W0506 23:33:11.447036 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 23:33:11.447: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 23:33:11.450: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay May 6 23:33:20.967: INFO: watch delete seen for pod-submit-status-2-0 May 6 23:33:20.967: INFO: Pod pod-submit-status-2-0 on node node2 timings total=9.512200735s t=594ms run=0s execute=0s May 6 23:33:22.084: INFO: watch delete seen for pod-submit-status-1-0 May 6 23:33:22.084: INFO: Pod pod-submit-status-1-0 on node node2 timings total=10.629318122s t=522ms run=0s execute=0s May 6 23:33:26.086: INFO: watch delete seen for pod-submit-status-1-1 May 6 23:33:26.086: INFO: Pod pod-submit-status-1-1 on node node2 timings total=4.001795777s t=974ms run=0s execute=0s May 6 23:33:26.883: INFO: watch delete seen for pod-submit-status-2-1 May 6 23:33:26.883: INFO: Pod pod-submit-status-2-1 on node node2 timings total=5.91660058s t=1.873s run=0s execute=0s May 6 23:33:32.487: INFO: watch delete seen for pod-submit-status-0-0 May 6 23:33:32.487: INFO: Pod pod-submit-status-0-0 on node node2 timings total=21.032470877s t=1.682s run=0s execute=0s May 6 23:33:35.189: INFO: watch delete seen for pod-submit-status-2-2 May 6 23:33:35.189: INFO: Pod pod-submit-status-2-2 on node node1 timings total=8.305943804s t=1.672s run=0s execute=0s May 6 23:33:37.047: INFO: watch delete seen for pod-submit-status-1-2 May 6 23:33:37.047: INFO: Pod pod-submit-status-1-2 on node node1 timings total=10.96157082s t=1.714s run=0s execute=0s May 6 23:33:41.577: INFO: watch delete seen for pod-submit-status-2-3 May 6 23:33:41.577: INFO: Pod pod-submit-status-2-3 on node node1 timings total=6.387249734s t=396ms run=0s execute=0s May 6 23:33:41.974: INFO: watch delete seen for pod-submit-status-0-1 May 6 23:33:41.974: INFO: Pod pod-submit-status-0-1 on node node1 timings total=9.486637113s t=749ms run=0s execute=0s May 6 23:33:43.373: INFO: watch delete seen for pod-submit-status-1-3 May 6 23:33:43.373: INFO: Pod pod-submit-status-1-3 on node node1 timings 
total=6.325503637s t=1.618s run=0s execute=0s May 6 23:33:53.376: INFO: watch delete seen for pod-submit-status-1-4 May 6 23:33:53.376: INFO: Pod pod-submit-status-1-4 on node node1 timings total=10.00284709s t=1.004s run=0s execute=0s May 6 23:33:56.437: INFO: watch delete seen for pod-submit-status-2-4 May 6 23:33:56.437: INFO: Pod pod-submit-status-2-4 on node node1 timings total=14.859926636s t=1.122s run=0s execute=0s May 6 23:33:58.597: INFO: watch delete seen for pod-submit-status-0-2 May 6 23:33:58.597: INFO: Pod pod-submit-status-0-2 on node node2 timings total=16.623358255s t=1.524s run=0s execute=0s May 6 23:33:59.599: INFO: watch delete seen for pod-submit-status-1-5 May 6 23:33:59.599: INFO: Pod pod-submit-status-1-5 on node node2 timings total=6.222899329s t=591ms run=0s execute=0s May 6 23:34:04.377: INFO: watch delete seen for pod-submit-status-2-5 May 6 23:34:04.377: INFO: Pod pod-submit-status-2-5 on node node1 timings total=7.939922278s t=1.59s run=0s execute=0s May 6 23:34:05.375: INFO: watch delete seen for pod-submit-status-1-6 May 6 23:34:05.375: INFO: Pod pod-submit-status-1-6 on node node1 timings total=5.775849453s t=1.437s run=0s execute=0s May 6 23:34:16.691: INFO: watch delete seen for pod-submit-status-2-6 May 6 23:34:16.691: INFO: Pod pod-submit-status-2-6 on node node1 timings total=12.314079407s t=1.86s run=0s execute=0s May 6 23:34:16.823: INFO: watch delete seen for pod-submit-status-0-3 May 6 23:34:16.823: INFO: Pod pod-submit-status-0-3 on node node2 timings total=18.22594655s t=312ms run=0s execute=0s May 6 23:34:19.118: INFO: watch delete seen for pod-submit-status-2-7 May 6 23:34:19.118: INFO: Pod pod-submit-status-2-7 on node node1 timings total=2.427611407s t=228ms run=0s execute=0s May 6 23:34:19.788: INFO: watch delete seen for pod-submit-status-0-4 May 6 23:34:19.788: INFO: Pod pod-submit-status-0-4 on node node2 timings total=2.964648773s t=973ms run=0s execute=0s May 6 23:34:26.753: INFO: watch delete seen for pod-submit-status-2-8 May 6 23:34:26.753: INFO: Pod pod-submit-status-2-8 on node node1 timings total=7.634942522s t=1.068s run=0s execute=0s May 6 23:34:26.800: INFO: watch delete seen for pod-submit-status-0-5 May 6 23:34:26.800: INFO: Pod pod-submit-status-0-5 on node node2 timings total=7.012020343s t=818ms run=0s execute=0s May 6 23:34:29.064: INFO: watch delete seen for pod-submit-status-2-9 May 6 23:34:29.065: INFO: Pod pod-submit-status-2-9 on node node1 timings total=2.311008617s t=486ms run=0s execute=0s May 6 23:34:36.694: INFO: watch delete seen for pod-submit-status-0-6 May 6 23:34:36.694: INFO: Pod pod-submit-status-0-6 on node node1 timings total=9.893389488s t=533ms run=0s execute=0s May 6 23:34:36.787: INFO: watch delete seen for pod-submit-status-2-10 May 6 23:34:36.787: INFO: Pod pod-submit-status-2-10 on node node2 timings total=7.722871872s t=648ms run=0s execute=0s May 6 23:34:46.790: INFO: watch delete seen for pod-submit-status-0-7 May 6 23:34:46.790: INFO: Pod pod-submit-status-0-7 on node node2 timings total=10.096038771s t=464ms run=0s execute=0s May 6 23:34:46.800: INFO: watch delete seen for pod-submit-status-2-11 May 6 23:34:46.800: INFO: Pod pod-submit-status-2-11 on node node2 timings total=10.012917706s t=1.082s run=0s execute=0s May 6 23:34:50.907: INFO: watch delete seen for pod-submit-status-1-7 May 6 23:34:50.907: INFO: Pod pod-submit-status-1-7 on node node2 timings total=45.532475992s t=1.279s run=0s execute=0s May 6 23:34:55.222: INFO: watch delete seen for pod-submit-status-1-8 May 6 23:34:55.222: 
INFO: Pod pod-submit-status-1-8 on node node1 timings total=4.314577592s t=1.254s run=0s execute=0s May 6 23:34:56.790: INFO: watch delete seen for pod-submit-status-0-8 May 6 23:34:56.790: INFO: Pod pod-submit-status-0-8 on node node2 timings total=10.000114345s t=1.805s run=0s execute=0s May 6 23:34:56.804: INFO: watch delete seen for pod-submit-status-2-12 May 6 23:34:56.804: INFO: Pod pod-submit-status-2-12 on node node1 timings total=10.003632542s t=1.932s run=3s execute=0s May 6 23:34:57.027: INFO: watch delete seen for pod-submit-status-2-13 May 6 23:34:57.027: INFO: Pod pod-submit-status-2-13 on node node1 timings total=222.985126ms t=58ms run=0s execute=0s May 6 23:35:01.236: INFO: watch delete seen for pod-submit-status-2-14 May 6 23:35:01.236: INFO: Pod pod-submit-status-2-14 on node node1 timings total=4.209112806s t=396ms run=0s execute=0s May 6 23:35:06.691: INFO: watch delete seen for pod-submit-status-0-9 May 6 23:35:06.691: INFO: Pod pod-submit-status-0-9 on node node1 timings total=9.901094437s t=1.396s run=0s execute=0s May 6 23:35:06.803: INFO: watch delete seen for pod-submit-status-1-9 May 6 23:35:06.803: INFO: Pod pod-submit-status-1-9 on node node2 timings total=11.581353417s t=154ms run=0s execute=0s May 6 23:35:16.695: INFO: watch delete seen for pod-submit-status-1-10 May 6 23:35:16.695: INFO: Pod pod-submit-status-1-10 on node node1 timings total=9.89141948s t=849ms run=0s execute=0s May 6 23:35:16.791: INFO: watch delete seen for pod-submit-status-0-10 May 6 23:35:16.792: INFO: Pod pod-submit-status-0-10 on node node2 timings total=10.100317442s t=963ms run=0s execute=0s May 6 23:35:26.905: INFO: watch delete seen for pod-submit-status-1-11 May 6 23:35:26.906: INFO: Pod pod-submit-status-1-11 on node node1 timings total=10.210804251s t=462ms run=0s execute=0s May 6 23:35:36.691: INFO: watch delete seen for pod-submit-status-1-12 May 6 23:35:36.691: INFO: Pod pod-submit-status-1-12 on node node1 timings total=9.785412334s t=1.904s run=3s execute=0s May 6 23:35:46.709: INFO: watch delete seen for pod-submit-status-1-13 May 6 23:35:46.709: INFO: Pod pod-submit-status-1-13 on node node1 timings total=10.017716466s t=1.983s run=3s execute=0s May 6 23:35:51.195: INFO: watch delete seen for pod-submit-status-0-11 May 6 23:35:51.195: INFO: Pod pod-submit-status-0-11 on node node2 timings total=34.40357267s t=1.118s run=0s execute=0s May 6 23:35:56.952: INFO: watch delete seen for pod-submit-status-1-14 May 6 23:35:56.952: INFO: Pod pod-submit-status-1-14 on node node1 timings total=10.243010082s t=1.577s run=0s execute=0s May 6 23:36:05.691: INFO: watch delete seen for pod-submit-status-0-12 May 6 23:36:05.691: INFO: Pod pod-submit-status-0-12 on node node1 timings total=14.495633349s t=1.204s run=0s execute=0s May 6 23:36:51.967: INFO: watch delete seen for pod-submit-status-0-13 May 6 23:36:51.967: INFO: Pod pod-submit-status-0-13 on node node2 timings total=46.276157117s t=76ms run=0s execute=0s May 6 23:37:06.790: INFO: watch delete seen for pod-submit-status-0-14 May 6 23:37:06.790: INFO: Pod pod-submit-status-0-14 on node node2 timings total=14.823145945s t=1.522s run=3s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:37:06.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1191" for this suite. 
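------------------------------
Context for the pod-submit-status runs above: each pod's container always exits 1, and the test deletes the pod after a random delay; the assertion is that no watch event ever shows such a container with a terminated state reporting exit code 0. A rough sketch of that check against the watch stream, with hypothetical helper and variable names (the real logic lives in test/e2e/node/pods.go):

    package main

    import (
    	"context"
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/watch"
    	"k8s.io/client-go/kubernetes"
    )

    // failOnFalseSuccess watches one pod and returns an error if any
    // container status ever reports a successful exit, which a container
    // that always runs `exit 1` should never do.
    func failOnFalseSuccess(ctx context.Context, client kubernetes.Interface, ns, podName string) error {
    	w, err := client.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
    		FieldSelector: "metadata.name=" + podName,
    	})
    	if err != nil {
    		return err
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		pod, ok := ev.Object.(*v1.Pod)
    		if !ok {
    			continue
    		}
    		for _, st := range pod.Status.ContainerStatuses {
    			if t := st.State.Terminated; t != nil && t.ExitCode == 0 {
    				return fmt.Errorf("container %s reported exit code 0", st.Name)
    			}
    		}
    		if ev.Type == watch.Deleted {
    			return nil // the "watch delete seen" lines above mark this point
    		}
    	}
    	return nil
    }
------------------------------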
• [SLOW TEST:235.384 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":1,"skipped":25,"failed":0} May 6 23:37:06.804: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:33.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-a6e9f4f1-e7b6-4df7-b978-9a871e384134 in namespace container-probe-7686 May 6 23:33:37.773: INFO: Started pod startup-a6e9f4f1-e7b6-4df7-b978-9a871e384134 in namespace container-probe-7686 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:33:37.775: INFO: Initial restart count of pod startup-a6e9f4f1-e7b6-4df7-b978-9a871e384134 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:37:38.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7686" for this suite. 
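------------------------------
Context for the startup-probe spec above: the pod's restartCount stays at 0 for the whole roughly four-minute observation window because the kubelet does not run the liveness probe until the startup probe has succeeded once. A minimal sketch of such a probe pair; the probed file paths are illustrative, and note that in the k8s.io/api v1.21 used by this suite the embedded struct is named Handler rather than ProbeHandler:

    package main

    import (
    	v1 "k8s.io/api/core/v1"
    )

    // The liveness probe below would fail on its first run, but it is inert
    // until the startup probe succeeds, so the container is never killed.
    var (
    	startupProbe = &v1.Probe{
    		ProbeHandler: v1.ProbeHandler{
    			Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/startup-done"}},
    		},
    		PeriodSeconds:    10,
    		FailureThreshold: 60, // tolerate up to ~10 minutes of slow startup
    	}
    	livenessProbe = &v1.Probe{
    		ProbeHandler: v1.ProbeHandler{
    			Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
    		},
    		PeriodSeconds:    10,
    		FailureThreshold: 1, // aggressive, yet delayed by the startup probe
    	}
    )
------------------------------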
• [SLOW TEST:244.597 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":2,"skipped":132,"failed":0} May 6 23:37:38.331: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:34:11.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-897a1b3d-0ca2-4555-9fa3-f954bfa1bdb7 in namespace container-probe-2998 May 6 23:34:15.042: INFO: Started pod liveness-897a1b3d-0ca2-4555-9fa3-f954bfa1bdb7 in namespace container-probe-2998 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:34:15.045: INFO: Initial restart count of pod liveness-897a1b3d-0ca2-4555-9fa3-f954bfa1bdb7 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:38:15.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2998" for this suite. 
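------------------------------
Context for the non-local-redirect spec above: an HTTP liveness probe whose endpoint answers with a redirect to a different host is recorded as a success (the kubelet emits a ProbeWarning event rather than following the redirect off-host), so restartCount stays 0 across the four-minute window. A sketch of such a probe in the spirit of the test, against an agnhost netexec /redirect endpoint; the port and redirect target are illustrative:

    package main

    import (
    	"net/url"

    	v1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // A redirect to 0.0.0.0 points at a host other than the pod IP, so the
    // kubelet counts the probe as a success instead of following it.
    var livenessProbe = &v1.Probe{
    	ProbeHandler: v1.ProbeHandler{ // named Handler in k8s.io/api v1.21
    		HTTPGet: &v1.HTTPGetAction{
    			Path: "/redirect?loc=" + url.QueryEscape("http://0.0.0.0/"),
    			Port: intstr.FromInt(8080),
    		},
    	},
    	InitialDelaySeconds: 15,
    	FailureThreshold:    1,
    }
------------------------------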
• [SLOW TEST:244.625 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":5,"skipped":714,"failed":0} May 6 23:38:15.629: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:33:11.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods W0506 23:33:11.463128 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 23:33:11.463: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 23:33:11.466: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 May 6 23:33:11.489: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) May 6 23:33:13.494: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) May 6 23:33:15.494: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) May 6 23:33:17.494: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) May 6 23:33:19.492: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) May 6 23:33:21.492: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 May 6 23:35:19.705: INFO: getRestartDelay: restartCount = 4, finishedAt=2022-05-06 23:34:26 +0000 UTC restartedAt=2022-05-06 23:35:18 +0000 UTC (52s) STEP: getting restart delay-1 May 6 23:36:48.044: INFO: getRestartDelay: restartCount = 5, finishedAt=2022-05-06 23:35:23 +0000 UTC restartedAt=2022-05-06 23:36:47 +0000 UTC (1m24s) STEP: getting restart delay-2 May 6 23:39:37.764: INFO: getRestartDelay: restartCount = 6, finishedAt=2022-05-06 23:36:52 +0000 UTC restartedAt=2022-05-06 23:39:37 +0000 UTC (2m45s) STEP: updating the image May 6 23:39:38.274: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update May 6 23:40:03.346: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-05-06 23:39:47 +0000 UTC restartedAt=2022-05-06 23:40:02 +0000 UTC (15s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 23:40:03.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6197" for this suite. 
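------------------------------
Context for the restart delays above: while pod-back-off-image crash-loops, the kubelet's restart back-off grows roughly geometrically (52s, 1m24s, 2m45s), then collapses to 15s after the image update, because updating a container's image resets its back-off timer. A sketch of triggering that reset with a strategic-merge patch; the pod name and namespace come from the log, while the container name, target image, and clientset variable are assumptions:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    )

    // resetBackoff swaps the crash-looping container's image; the kubelet
    // restarts the changed container promptly (the 15s delay above) instead
    // of waiting out the accumulated back-off.
    func resetBackoff(ctx context.Context, client kubernetes.Interface) error {
    	patch := []byte(`{"spec":{"containers":[{"name":"back-off","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}`)
    	_, err := client.CoreV1().Pods("pods-6197").Patch(
    		ctx, "pod-back-off-image", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    	return err
    }
------------------------------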
• [SLOW TEST:411.926 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":1,"skipped":30,"failed":0} May 6 23:40:03.359: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 23:35:26.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready May 6 23:35:26.247: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration May 6 23:35:27.258: INFO: node status heartbeat is unchanged for 1.003612194s, waiting for 1m20s May 6 23:35:28.259: INFO: node status heartbeat is unchanged for 2.005158535s, waiting for 1m20s May 6 23:35:29.258: INFO: node status heartbeat is unchanged for 3.004070888s, waiting for 1m20s May 6 23:35:30.261: INFO: node status heartbeat is unchanged for 4.006444391s, waiting for 1m20s May 6 23:35:31.258: INFO: node status heartbeat is unchanged for 5.003468025s, waiting for 1m20s May 6 23:35:32.257: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 6 23:35:32.262: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:31 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:31 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 
UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:21 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:31 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } May 6 23:35:33.257: INFO: node status heartbeat is unchanged for 1.000143882s, waiting for 1m20s May 6 23:35:34.258: INFO: node status heartbeat is unchanged for 2.00071478s, waiting for 1m20s May 6 23:35:35.258: INFO: node status heartbeat is unchanged for 3.000356111s, waiting for 1m20s May 6 23:35:36.258: INFO: node status heartbeat is unchanged for 4.000934165s, waiting for 1m20s May 6 23:35:37.258: INFO: node status heartbeat is unchanged for 5.000459397s, waiting for 1m20s May 6 23:35:38.259: INFO: node status heartbeat is unchanged for 6.002078077s, waiting for 1m20s May 6 23:35:39.258: INFO: node status heartbeat is unchanged for 7.00120995s, waiting for 1m20s May 6 23:35:40.258: INFO: node status heartbeat is unchanged for 8.000361321s, waiting for 1m20s May 6 23:35:41.257: INFO: node status heartbeat is unchanged for 8.999874111s, waiting for 1m20s May 6 23:35:42.259: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 6 23:35:42.264: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:41 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:41 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:41 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", 
LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } May 6 23:35:43.258: INFO: node status heartbeat is unchanged for 998.858621ms, waiting for 1m20s May 6 23:35:44.260: INFO: node status heartbeat is unchanged for 2.000649995s, waiting for 1m20s May 6 23:35:45.258: INFO: node status heartbeat is unchanged for 2.998218244s, waiting for 1m20s May 6 23:35:46.259: INFO: node status heartbeat is unchanged for 3.999418299s, waiting for 1m20s May 6 23:35:47.260: INFO: node status heartbeat is unchanged for 5.000185359s, waiting for 1m20s May 6 23:35:48.260: INFO: node status heartbeat is unchanged for 6.000412928s, waiting for 1m20s May 6 23:35:49.259: INFO: node status heartbeat is unchanged for 6.999727494s, waiting for 1m20s May 6 23:35:50.258: INFO: node status heartbeat is unchanged for 7.998902906s, waiting for 1m20s May 6 23:35:51.259: INFO: node status heartbeat is unchanged for 8.999218515s, waiting for 1m20s May 6 23:35:52.259: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s May 6 23:35:52.263: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:52 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:52 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:52 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, NodeInfo: {MachineID: "c77ab26e59394c64a4d3ca530c1cefb5", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "0fe5c664-0bc1-49bd-8b38-c77825eebe76", KernelVersion: "3.10.0-1160.62.1.el7.x86_64", ...}, 
Images: []v1.ContainerImage{ ... // 20 identical elements {Names: {"k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d"..., "k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2"}, SizeBytes: 44576952}, {Names: {"localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70f"..., "localhost:30500/sriov-device-plugin:v3.3.2"}, SizeBytes: 42676189}, + { + Names: []string{ + "k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34d"..., + "k8s.gcr.io/e2e-test-images/nonroot:1.1", + }, + SizeBytes: 42321438, + }, {Names: {"kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f"..., "kubernetesui/metrics-scraper:v1.0.6"}, SizeBytes: 34548789}, {Names: {"localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa67"..., "localhost:30500/tasextender:0.4"}, SizeBytes: 28910791}, ... // 5 identical elements {Names: {"k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf"..., "k8s.gcr.io/e2e-test-images/nonewprivs:1.3"}, SizeBytes: 7107254}, {Names: {"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172"..., "appropriate/curl:edge"}, SizeBytes: 5654234}, + { + Names: []string{ + "gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c6"..., + "gcr.io/authenticated-image-pulling/alpine:3.7", + }, + SizeBytes: 4206620, + }, {Names: {"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad"..., "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}, SizeBytes: 1154361}, {Names: {"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea"..., "busybox:1.28"}, SizeBytes: 1146369}, ... // 2 identical elements }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } May 6 23:35:53.260: INFO: node status heartbeat is unchanged for 1.001194984s, waiting for 1m20s May 6 23:35:54.258: INFO: node status heartbeat is unchanged for 1.999653768s, waiting for 1m20s May 6 23:35:55.258: INFO: node status heartbeat is unchanged for 2.999477529s, waiting for 1m20s May 6 23:35:56.258: INFO: node status heartbeat is unchanged for 3.999278165s, waiting for 1m20s May 6 23:35:57.258: INFO: node status heartbeat is unchanged for 4.999179243s, waiting for 1m20s May 6 23:35:58.261: INFO: node status heartbeat is unchanged for 6.002032131s, waiting for 1m20s May 6 23:35:59.261: INFO: node status heartbeat is unchanged for 7.001997451s, waiting for 1m20s May 6 23:36:00.259: INFO: node status heartbeat is unchanged for 8.000645634s, waiting for 1m20s May 6 23:36:01.259: INFO: node status heartbeat is unchanged for 9.000013071s, waiting for 1m20s May 6 23:36:02.260: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 6 23:36:02.264: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 
23:35:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:02 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:02 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:35:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:02 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } May 6 23:36:03.262: INFO: node status heartbeat is unchanged for 1.002350053s, waiting for 1m20s May 6 23:36:04.261: INFO: node status heartbeat is unchanged for 2.000780959s, waiting for 1m20s May 6 23:36:05.258: INFO: node status heartbeat is unchanged for 2.998587792s, waiting for 1m20s May 6 23:36:06.260: INFO: node status heartbeat is unchanged for 3.999659042s, waiting for 1m20s May 6 23:36:07.259: INFO: node status heartbeat is unchanged for 4.998748934s, waiting for 1m20s May 6 23:36:08.259: INFO: node status heartbeat is unchanged for 5.998742146s, waiting for 1m20s May 6 23:36:09.259: INFO: node status heartbeat is unchanged for 6.998746559s, waiting for 1m20s May 6 23:36:10.259: INFO: node status heartbeat is unchanged for 7.998973414s, waiting for 1m20s May 6 23:36:11.260: INFO: node status heartbeat is unchanged for 8.999885583s, waiting for 1m20s May 6 23:36:12.259: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 6 23:36:12.267: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:12 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:12 +0000 UTC"}, LastTransitionTime: {Time: 
s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:12 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } May 6 23:36:13.258: INFO: node status heartbeat is unchanged for 998.823512ms, waiting for 1m20s May 6 23:36:14.261: INFO: node status heartbeat is unchanged for 2.00197173s, waiting for 1m20s May 6 23:36:15.261: INFO: node status heartbeat is unchanged for 3.001244774s, waiting for 1m20s May 6 23:36:16.259: INFO: node status heartbeat is unchanged for 3.999971873s, waiting for 1m20s May 6 23:36:17.260: INFO: node status heartbeat is unchanged for 5.001084413s, waiting for 1m20s May 6 23:36:18.261: INFO: node status heartbeat is unchanged for 6.0013727s, waiting for 1m20s May 6 23:36:19.260: INFO: node status heartbeat is unchanged for 7.001035298s, waiting for 1m20s May 6 23:36:20.261: INFO: node status heartbeat is unchanged for 8.002101333s, waiting for 1m20s May 6 23:36:21.260: INFO: node status heartbeat is unchanged for 9.000426517s, waiting for 1m20s May 6 23:36:22.259: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 6 23:36:22.263: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:22 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:22 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:22 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: 
"Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } May 6 23:36:23.259: INFO: node status heartbeat is unchanged for 1.00042842s, waiting for 1m20s May 6 23:36:24.259: INFO: node status heartbeat is unchanged for 2.00019936s, waiting for 1m20s May 6 23:36:25.258: INFO: node status heartbeat is unchanged for 2.999817285s, waiting for 1m20s May 6 23:36:26.260: INFO: node status heartbeat is unchanged for 4.000992733s, waiting for 1m20s May 6 23:36:27.260: INFO: node status heartbeat is unchanged for 5.001053707s, waiting for 1m20s May 6 23:36:28.259: INFO: node status heartbeat is unchanged for 6.000402035s, waiting for 1m20s May 6 23:36:29.259: INFO: node status heartbeat is unchanged for 7.000834957s, waiting for 1m20s May 6 23:36:30.259: INFO: node status heartbeat is unchanged for 8.00036563s, waiting for 1m20s May 6 23:36:31.260: INFO: node status heartbeat is unchanged for 9.001113125s, waiting for 1m20s May 6 23:36:32.260: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 6 23:36:32.264: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:32 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:32 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:36:32 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
May 6 23:36:33.259: INFO: node status heartbeat is unchanged for 999.078621ms, waiting for 1m20s
May 6 23:36:34.259: INFO: node status heartbeat is unchanged for 1.9990351s, waiting for 1m20s
May 6 23:36:35.262: INFO: node status heartbeat is unchanged for 3.002564947s, waiting for 1m20s
May 6 23:36:36.259: INFO: node status heartbeat is unchanged for 3.998871329s, waiting for 1m20s
May 6 23:36:37.260: INFO: node status heartbeat is unchanged for 4.999838912s, waiting for 1m20s
May 6 23:36:38.258: INFO: node status heartbeat is unchanged for 5.998165522s, waiting for 1m20s
May 6 23:36:39.262: INFO: node status heartbeat is unchanged for 7.002160681s, waiting for 1m20s
May 6 23:36:40.258: INFO: node status heartbeat is unchanged for 7.998081966s, waiting for 1m20s
May 6 23:36:41.258: INFO: node status heartbeat is unchanged for 8.998349451s, waiting for 1m20s
May 6 23:36:42.259: INFO: node status heartbeat is unchanged for 9.999152411s, waiting for 1m20s
May 6 23:36:43.260: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:36:43.265: INFO: v1.NodeStatus{ ...identical to the 23:36:32 diff above; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:36:32 +0000 UTC" -> s"2022-05-06 23:36:42 +0000 UTC"; all other fields identical... }
May 6 23:36:44.258: INFO: node status heartbeat is unchanged for 998.281596ms, waiting for 1m20s
May 6 23:36:45.259: INFO: node status heartbeat is unchanged for 1.999539309s, waiting for 1m20s
May 6 23:36:46.259: INFO: node status heartbeat is unchanged for 2.998764471s, waiting for 1m20s
May 6 23:36:47.259: INFO: node status heartbeat is unchanged for 3.998788782s, waiting for 1m20s
May 6 23:36:48.258: INFO: node status heartbeat is unchanged for 4.998403668s, waiting for 1m20s
May 6 23:36:49.258: INFO: node status heartbeat is unchanged for 5.998245135s, waiting for 1m20s
May 6 23:36:50.259: INFO: node status heartbeat is unchanged for 6.998868096s, waiting for 1m20s
May 6 23:36:51.258: INFO: node status heartbeat is unchanged for 7.998477266s, waiting for 1m20s
May 6 23:36:52.258: INFO: node status heartbeat is unchanged for 8.998275458s, waiting for 1m20s
May 6 23:36:53.259: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:36:53.263: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:36:42 +0000 UTC" -> s"2022-05-06 23:36:52 +0000 UTC"; all other fields identical... }
May 6 23:36:54.261: INFO: node status heartbeat is unchanged for 1.001751455s, waiting for 1m20s
May 6 23:36:55.259: INFO: node status heartbeat is unchanged for 2.000658636s, waiting for 1m20s
May 6 23:36:56.259: INFO: node status heartbeat is unchanged for 3.000642616s, waiting for 1m20s
May 6 23:36:57.260: INFO: node status heartbeat is unchanged for 4.000722589s, waiting for 1m20s
May 6 23:36:58.259: INFO: node status heartbeat is unchanged for 5.000065649s, waiting for 1m20s
May 6 23:36:59.258: INFO: node status heartbeat is unchanged for 5.998841295s, waiting for 1m20s
May 6 23:37:00.260: INFO: node status heartbeat is unchanged for 7.000732365s, waiting for 1m20s
May 6 23:37:01.259: INFO: node status heartbeat is unchanged for 7.999879437s, waiting for 1m20s
May 6 23:37:02.259: INFO: node status heartbeat is unchanged for 9.000331462s, waiting for 1m20s
May 6 23:37:03.259: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:37:03.264: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:36:52 +0000 UTC" -> s"2022-05-06 23:37:02 +0000 UTC"; all other fields identical... }
May 6 23:37:04.258: INFO: node status heartbeat is unchanged for 998.50126ms, waiting for 1m20s
May 6 23:37:05.258: INFO: node status heartbeat is unchanged for 1.998898681s, waiting for 1m20s
May 6 23:37:06.259: INFO: node status heartbeat is unchanged for 2.99975709s, waiting for 1m20s
May 6 23:37:07.259: INFO: node status heartbeat is unchanged for 3.999714561s, waiting for 1m20s
May 6 23:37:08.259: INFO: node status heartbeat is unchanged for 5.000111745s, waiting for 1m20s
May 6 23:37:09.259: INFO: node status heartbeat is unchanged for 5.999503363s, waiting for 1m20s
May 6 23:37:10.259: INFO: node status heartbeat is unchanged for 6.999348689s, waiting for 1m20s
May 6 23:37:11.259: INFO: node status heartbeat is unchanged for 7.999701606s, waiting for 1m20s
May 6 23:37:12.258: INFO: node status heartbeat is unchanged for 8.998637561s, waiting for 1m20s
May 6 23:37:13.258: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:37:13.263: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:37:02 +0000 UTC" -> s"2022-05-06 23:37:12 +0000 UTC"; all other fields identical... }
May 6 23:37:14.261: INFO: node status heartbeat is unchanged for 1.003179876s, waiting for 1m20s
May 6 23:37:15.258: INFO: node status heartbeat is unchanged for 2.000149636s, waiting for 1m20s
May 6 23:37:16.259: INFO: node status heartbeat is unchanged for 3.000947736s, waiting for 1m20s
May 6 23:37:17.258: INFO: node status heartbeat is unchanged for 3.999468503s, waiting for 1m20s
May 6 23:37:18.262: INFO: node status heartbeat is unchanged for 5.003707294s, waiting for 1m20s
May 6 23:37:19.259: INFO: node status heartbeat is unchanged for 6.000617012s, waiting for 1m20s
May 6 23:37:20.260: INFO: node status heartbeat is unchanged for 7.00173039s, waiting for 1m20s
May 6 23:37:21.259: INFO: node status heartbeat is unchanged for 8.001198076s, waiting for 1m20s
May 6 23:37:22.260: INFO: node status heartbeat is unchanged for 9.001469006s, waiting for 1m20s
May 6 23:37:23.259: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:37:23.264: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:37:12 +0000 UTC" -> s"2022-05-06 23:37:22 +0000 UTC"; all other fields identical... }
May 6 23:37:24.261: INFO: node status heartbeat is unchanged for 1.001835635s, waiting for 1m20s
May 6 23:37:25.258: INFO: node status heartbeat is unchanged for 1.999097472s, waiting for 1m20s
May 6 23:37:26.259: INFO: node status heartbeat is unchanged for 2.999867149s, waiting for 1m20s
May 6 23:37:27.259: INFO: node status heartbeat is unchanged for 3.999684058s, waiting for 1m20s
May 6 23:37:28.258: INFO: node status heartbeat is unchanged for 4.998660347s, waiting for 1m20s
May 6 23:37:29.257: INFO: node status heartbeat is unchanged for 5.998366215s, waiting for 1m20s
May 6 23:37:30.259: INFO: node status heartbeat is unchanged for 7.000137394s, waiting for 1m20s
May 6 23:37:31.259: INFO: node status heartbeat is unchanged for 7.999696526s, waiting for 1m20s
May 6 23:37:32.258: INFO: node status heartbeat is unchanged for 8.998776258s, waiting for 1m20s
May 6 23:37:33.257: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:37:33.262: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:37:22 +0000 UTC" -> s"2022-05-06 23:37:32 +0000 UTC"; all other fields identical... }
May 6 23:37:34.259: INFO: node status heartbeat is unchanged for 1.001490567s, waiting for 1m20s
May 6 23:37:35.260: INFO: node status heartbeat is unchanged for 2.002969627s, waiting for 1m20s
May 6 23:37:36.258: INFO: node status heartbeat is unchanged for 3.000820607s, waiting for 1m20s
May 6 23:37:37.260: INFO: node status heartbeat is unchanged for 4.00252681s, waiting for 1m20s
May 6 23:37:38.260: INFO: node status heartbeat is unchanged for 5.002123113s, waiting for 1m20s
May 6 23:37:39.260: INFO: node status heartbeat is unchanged for 6.002183517s, waiting for 1m20s
May 6 23:37:40.260: INFO: node status heartbeat is unchanged for 7.002272113s, waiting for 1m20s
May 6 23:37:41.259: INFO: node status heartbeat is unchanged for 8.001540034s, waiting for 1m20s
May 6 23:37:42.261: INFO: node status heartbeat is unchanged for 9.003939851s, waiting for 1m20s
May 6 23:37:43.261: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:37:43.265: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:37:32 +0000 UTC" -> s"2022-05-06 23:37:42 +0000 UTC"; all other fields identical... }
May 6 23:37:44.261: INFO: node status heartbeat is unchanged for 1.00027807s, waiting for 1m20s
May 6 23:37:45.259: INFO: node status heartbeat is unchanged for 1.997920983s, waiting for 1m20s
May 6 23:37:46.258: INFO: node status heartbeat is unchanged for 2.996818724s, waiting for 1m20s
May 6 23:37:47.262: INFO: node status heartbeat is unchanged for 4.000999835s, waiting for 1m20s
May 6 23:37:48.260: INFO: node status heartbeat is unchanged for 4.999559791s, waiting for 1m20s
May 6 23:37:49.260: INFO: node status heartbeat is unchanged for 5.998678827s, waiting for 1m20s
May 6 23:37:50.261: INFO: node status heartbeat is unchanged for 7.000421374s, waiting for 1m20s
May 6 23:37:51.258: INFO: node status heartbeat is unchanged for 7.997079786s, waiting for 1m20s
May 6 23:37:52.260: INFO: node status heartbeat is unchanged for 8.999211286s, waiting for 1m20s
May 6 23:37:53.259: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:37:53.264: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:37:42 +0000 UTC" -> s"2022-05-06 23:37:52 +0000 UTC"; all other fields identical... }
May 6 23:37:54.260: INFO: node status heartbeat is unchanged for 1.000468999s, waiting for 1m20s
May 6 23:37:55.262: INFO: node status heartbeat is unchanged for 2.002355276s, waiting for 1m20s
May 6 23:37:56.258: INFO: node status heartbeat is unchanged for 2.998530115s, waiting for 1m20s
May 6 23:37:57.258: INFO: node status heartbeat is unchanged for 3.998503516s, waiting for 1m20s
May 6 23:37:58.262: INFO: node status heartbeat is unchanged for 5.002877098s, waiting for 1m20s
May 6 23:37:59.261: INFO: node status heartbeat is unchanged for 6.001800516s, waiting for 1m20s
May 6 23:38:00.260: INFO: node status heartbeat is unchanged for 7.000620876s, waiting for 1m20s
May 6 23:38:01.258: INFO: node status heartbeat is unchanged for 7.998596202s, waiting for 1m20s
May 6 23:38:02.261: INFO: node status heartbeat is unchanged for 9.001793974s, waiting for 1m20s
May 6 23:38:03.262: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:38:03.266: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:37:52 +0000 UTC" -> s"2022-05-06 23:38:02 +0000 UTC"; all other fields identical... }
May 6 23:38:04.259: INFO: node status heartbeat is unchanged for 997.316629ms, waiting for 1m20s
May 6 23:38:05.260: INFO: node status heartbeat is unchanged for 1.997832768s, waiting for 1m20s
May 6 23:38:06.259: INFO: node status heartbeat is unchanged for 2.997590781s, waiting for 1m20s
May 6 23:38:07.298: INFO: node status heartbeat is unchanged for 4.036434294s, waiting for 1m20s
May 6 23:38:08.261: INFO: node status heartbeat is unchanged for 4.998720046s, waiting for 1m20s
May 6 23:38:09.259: INFO: node status heartbeat is unchanged for 5.997578284s, waiting for 1m20s
May 6 23:38:10.259: INFO: node status heartbeat is unchanged for 6.997245516s, waiting for 1m20s
May 6 23:38:11.259: INFO: node status heartbeat is unchanged for 7.996723223s, waiting for 1m20s
May 6 23:38:12.261: INFO: node status heartbeat is unchanged for 8.998768525s, waiting for 1m20s
May 6 23:38:13.261: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:38:13.268: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:38:02 +0000 UTC" -> s"2022-05-06 23:38:12 +0000 UTC"; all other fields identical... }
May 6 23:38:14.261: INFO: node status heartbeat is unchanged for 999.539416ms, waiting for 1m20s
May 6 23:38:15.258: INFO: node status heartbeat is unchanged for 1.996859918s, waiting for 1m20s
May 6 23:38:16.259: INFO: node status heartbeat is unchanged for 2.997864389s, waiting for 1m20s
May 6 23:38:17.261: INFO: node status heartbeat is unchanged for 4.000263605s, waiting for 1m20s
May 6 23:38:18.261: INFO: node status heartbeat is unchanged for 4.999824856s, waiting for 1m20s
May 6 23:38:19.259: INFO: node status heartbeat is unchanged for 5.997984095s, waiting for 1m20s
May 6 23:38:20.259: INFO: node status heartbeat is unchanged for 6.997646548s, waiting for 1m20s
May 6 23:38:21.260: INFO: node status heartbeat is unchanged for 7.998607543s, waiting for 1m20s
May 6 23:38:22.260: INFO: node status heartbeat is unchanged for 8.998573737s, waiting for 1m20s
May 6 23:38:23.258: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:38:23.262: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:38:12 +0000 UTC" -> s"2022-05-06 23:38:22 +0000 UTC"; all other fields identical... }
May 6 23:38:24.260: INFO: node status heartbeat is unchanged for 1.002186786s, waiting for 1m20s
May 6 23:38:25.262: INFO: node status heartbeat is unchanged for 2.003729551s, waiting for 1m20s
May 6 23:38:26.258: INFO: node status heartbeat is unchanged for 3.000153602s, waiting for 1m20s
May 6 23:38:27.260: INFO: node status heartbeat is unchanged for 4.002204966s, waiting for 1m20s
May 6 23:38:28.260: INFO: node status heartbeat is unchanged for 5.001871012s, waiting for 1m20s
May 6 23:38:29.263: INFO: node status heartbeat is unchanged for 6.00483848s, waiting for 1m20s
May 6 23:38:30.261: INFO: node status heartbeat is unchanged for 7.002946154s, waiting for 1m20s
May 6 23:38:31.258: INFO: node status heartbeat is unchanged for 8.000130626s, waiting for 1m20s
May 6 23:38:32.262: INFO: node status heartbeat is unchanged for 9.004209603s, waiting for 1m20s
May 6 23:38:33.260: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:38:33.265: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:38:22 +0000 UTC" -> s"2022-05-06 23:38:32 +0000 UTC"; all other fields identical... }
May 6 23:38:34.260: INFO: node status heartbeat is unchanged for 1.000079489s, waiting for 1m20s
May 6 23:38:35.260: INFO: node status heartbeat is unchanged for 1.999877667s, waiting for 1m20s
May 6 23:38:36.259: INFO: node status heartbeat is unchanged for 2.998532844s, waiting for 1m20s
May 6 23:38:37.259: INFO: node status heartbeat is unchanged for 3.999095742s, waiting for 1m20s
May 6 23:38:38.261: INFO: node status heartbeat is unchanged for 5.00089396s, waiting for 1m20s
May 6 23:38:39.260: INFO: node status heartbeat is unchanged for 5.999986348s, waiting for 1m20s
May 6 23:38:40.261: INFO: node status heartbeat is unchanged for 7.000614279s, waiting for 1m20s
May 6 23:38:41.259: INFO: node status heartbeat is unchanged for 7.999039301s, waiting for 1m20s
May 6 23:38:42.259: INFO: node status heartbeat is unchanged for 8.998393373s, waiting for 1m20s
May 6 23:38:43.260: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:38:43.264: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:38:32 +0000 UTC" -> s"2022-05-06 23:38:42 +0000 UTC"; all other fields identical... }
May 6 23:38:44.260: INFO: node status heartbeat is unchanged for 1.000815077s, waiting for 1m20s
May 6 23:38:45.257: INFO: node status heartbeat is unchanged for 1.997743443s, waiting for 1m20s
May 6 23:38:46.258: INFO: node status heartbeat is unchanged for 2.998202781s, waiting for 1m20s
May 6 23:38:47.259: INFO: node status heartbeat is unchanged for 3.999290363s, waiting for 1m20s
May 6 23:38:48.259: INFO: node status heartbeat is unchanged for 4.999525511s, waiting for 1m20s
May 6 23:38:49.259: INFO: node status heartbeat is unchanged for 5.999330624s, waiting for 1m20s
May 6 23:38:50.258: INFO: node status heartbeat is unchanged for 6.998014764s, waiting for 1m20s
May 6 23:38:51.259: INFO: node status heartbeat is unchanged for 7.999236433s, waiting for 1m20s
May 6 23:38:52.260: INFO: node status heartbeat is unchanged for 8.999906458s, waiting for 1m20s
May 6 23:38:53.259: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:38:53.263: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:38:42 +0000 UTC" -> s"2022-05-06 23:38:52 +0000 UTC"; all other fields identical... }
May 6 23:38:54.260: INFO: node status heartbeat is unchanged for 1.001001153s, waiting for 1m20s
May 6 23:38:55.259: INFO: node status heartbeat is unchanged for 2.000670891s, waiting for 1m20s
May 6 23:38:56.258: INFO: node status heartbeat is unchanged for 2.999801538s, waiting for 1m20s
May 6 23:38:57.260: INFO: node status heartbeat is unchanged for 4.001038698s, waiting for 1m20s
May 6 23:38:58.259: INFO: node status heartbeat is unchanged for 5.000042779s, waiting for 1m20s
May 6 23:38:59.259: INFO: node status heartbeat is unchanged for 6.00091797s, waiting for 1m20s
May 6 23:39:00.259: INFO: node status heartbeat is unchanged for 7.000025411s, waiting for 1m20s
May 6 23:39:01.259: INFO: node status heartbeat is unchanged for 8.000865517s, waiting for 1m20s
May 6 23:39:02.260: INFO: node status heartbeat is unchanged for 9.001155533s, waiting for 1m20s
May 6 23:39:03.261: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s
May 6 23:39:03.265: INFO: v1.NodeStatus{ ...same diff shape; LastHeartbeatTime (MemoryPressure/DiskPressure/PIDPressure): s"2022-05-06 23:38:52 +0000 UTC" -> s"2022-05-06 23:39:03 +0000 UTC"; all other fields identical... }
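A note on the cadence the collapsed diffs make visible: the three condition heartbeats advance every ~10s (once 11s, at 23:39:03), which is consistent with the kubelet's default node status update frequency of 10s, and the 40s the test waits after each change matches the default node lease duration. Since node leases became the kubelet's primary heartbeat, the same liveness signal can also be read from the coordination.k8s.io Lease object named after the node in kube-node-lease; the sketch below reads it once. This is a hedged companion example, reusing the kubeconfig path and node name from this run.

// Read the node's heartbeat Lease and print when it was last renewed.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.Background(), "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Spec.RenewTime advances on every renewal (by default every 10s),
	// even during windows when the NodeStatus itself does not change.
	fmt.Println("lease last renewed:", lease.Spec.RenewTime)
}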
// 5 identical fields } May 6 23:39:04.259: INFO: node status heartbeat is unchanged for 998.005321ms, waiting for 1m20s May 6 23:39:05.258: INFO: node status heartbeat is unchanged for 1.997183165s, waiting for 1m20s May 6 23:39:06.259: INFO: node status heartbeat is unchanged for 2.99845287s, waiting for 1m20s May 6 23:39:07.259: INFO: node status heartbeat is unchanged for 3.998657777s, waiting for 1m20s May 6 23:39:08.259: INFO: node status heartbeat is unchanged for 4.998223016s, waiting for 1m20s May 6 23:39:09.259: INFO: node status heartbeat is unchanged for 5.998263604s, waiting for 1m20s May 6 23:39:10.259: INFO: node status heartbeat is unchanged for 6.998156061s, waiting for 1m20s May 6 23:39:11.260: INFO: node status heartbeat is unchanged for 7.999029862s, waiting for 1m20s May 6 23:39:12.260: INFO: node status heartbeat is unchanged for 8.99972679s, waiting for 1m20s May 6 23:39:13.259: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 6 23:39:13.264: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 6 23:39:14.261: INFO: node status heartbeat is unchanged for 1.00134114s, waiting for 1m20s May 6 23:39:15.259: INFO: node status heartbeat is unchanged for 1.999201756s, waiting for 1m20s May 6 23:39:16.258: INFO: node status heartbeat is unchanged for 2.998269072s, waiting for 1m20s May 6 23:39:17.259: INFO: node status heartbeat is unchanged for 3.999894149s, waiting for 1m20s May 6 23:39:18.259: INFO: node status heartbeat is unchanged for 4.999528762s, waiting for 1m20s May 6 23:39:19.260: INFO: node status heartbeat is unchanged for 6.000389902s, waiting for 1m20s May 6 23:39:20.261: INFO: node status heartbeat is unchanged for 7.001277639s, waiting for 1m20s May 6 23:39:21.260: INFO: node status heartbeat is unchanged for 8.00013952s, waiting for 1m20s May 6 23:39:22.258: INFO: node status heartbeat is unchanged for 8.998788961s, waiting for 1m20s May 6 23:39:23.259: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 6 23:39:23.264: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 6 23:39:24.261: INFO: node status heartbeat is unchanged for 1.001457569s, waiting for 1m20s May 6 23:39:25.259: INFO: node status heartbeat is unchanged for 1.999334817s, waiting for 1m20s May 6 23:39:26.258: INFO: node status heartbeat is unchanged for 2.999218089s, waiting for 1m20s May 6 23:39:27.259: INFO: node status heartbeat is unchanged for 4.000006467s, waiting for 1m20s May 6 23:39:28.259: INFO: node status heartbeat is unchanged for 4.999776938s, waiting for 1m20s May 6 23:39:29.259: INFO: node status heartbeat is unchanged for 5.999596466s, waiting for 1m20s May 6 23:39:30.259: INFO: node status heartbeat is unchanged for 6.999661786s, waiting for 1m20s May 6 23:39:31.258: INFO: node status heartbeat is unchanged for 7.999166267s, waiting for 1m20s May 6 23:39:32.259: INFO: node status heartbeat is unchanged for 8.999959669s, waiting for 1m20s May 6 23:39:33.258: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 6 23:39:33.262: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 6 23:39:34.260: INFO: node status heartbeat is unchanged for 1.001760325s, waiting for 1m20s May 6 23:39:35.261: INFO: node status heartbeat is unchanged for 2.003250467s, waiting for 1m20s May 6 23:39:36.259: INFO: node status heartbeat is unchanged for 3.001234348s, waiting for 1m20s May 6 23:39:37.259: INFO: node status heartbeat is unchanged for 4.001032354s, waiting for 1m20s May 6 23:39:38.259: INFO: node status heartbeat is unchanged for 5.001327005s, waiting for 1m20s May 6 23:39:39.261: INFO: node status heartbeat is unchanged for 6.002930576s, waiting for 1m20s May 6 23:39:40.261: INFO: node status heartbeat is unchanged for 7.0028532s, waiting for 1m20s May 6 23:39:41.257: INFO: node status heartbeat is unchanged for 7.999184591s, waiting for 1m20s May 6 23:39:42.261: INFO: node status heartbeat is unchanged for 9.002598387s, waiting for 1m20s May 6 23:39:43.260: INFO: node status heartbeat is unchanged for 10.001808423s, waiting for 1m20s May 6 23:39:44.260: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 6 23:39:44.265: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 6 23:39:45.259: INFO: node status heartbeat is unchanged for 998.98923ms, waiting for 1m20s May 6 23:39:46.259: INFO: node status heartbeat is unchanged for 1.998993829s, waiting for 1m20s May 6 23:39:47.261: INFO: node status heartbeat is unchanged for 3.000941063s, waiting for 1m20s May 6 23:39:48.260: INFO: node status heartbeat is unchanged for 4.000553213s, waiting for 1m20s May 6 23:39:49.261: INFO: node status heartbeat is unchanged for 5.000632518s, waiting for 1m20s May 6 23:39:50.260: INFO: node status heartbeat is unchanged for 6.000583641s, waiting for 1m20s May 6 23:39:51.260: INFO: node status heartbeat is unchanged for 7.000344584s, waiting for 1m20s May 6 23:39:52.260: INFO: node status heartbeat is unchanged for 8.000301508s, waiting for 1m20s May 6 23:39:53.261: INFO: node status heartbeat is unchanged for 9.000809292s, waiting for 1m20s May 6 23:39:54.261: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 6 23:39:54.265: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-06 23:39:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
May 6 23:39:55.260: INFO: node status heartbeat is unchanged for 998.816361ms, waiting for 1m20s
May 6 23:39:56.258: INFO: node status heartbeat is unchanged for 1.997558575s, waiting for 1m20s
May 6 23:39:57.261: INFO: node status heartbeat is unchanged for 3.000155566s, waiting for 1m20s
May 6 23:39:58.260: INFO: node status heartbeat is unchanged for 3.998990174s, waiting for 1m20s
May 6 23:39:59.261: INFO: node status heartbeat is unchanged for 4.999798982s, waiting for 1m20s
May 6 23:40:00.260: INFO: node status heartbeat is unchanged for 5.998695176s, waiting for 1m20s
May 6 23:40:01.260: INFO: node status heartbeat is unchanged for 6.99870354s, waiting for 1m20s
May 6 23:40:02.258: INFO: node status heartbeat is unchanged for 7.997676892s, waiting for 1m20s
May 6 23:40:03.262: INFO: node status heartbeat is unchanged for 9.001321834s, waiting for 1m20s
May 6 23:40:04.258: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:40:04.263: INFO:   v1.NodeStatus{
    Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase:       "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...},
      {
        Type:               "MemoryPressure",
        Status:             "False",
-       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:39:53 +0000 UTC"},
+       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:03 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"},
        Reason:             "KubeletHasSufficientMemory",
        Message:            "kubelet has sufficient memory available",
      },
      {
        Type:               "DiskPressure",
        Status:             "False",
-       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:39:53 +0000 UTC"},
+       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:03 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"},
        Reason:             "KubeletHasNoDiskPressure",
        Message:            "kubelet has no disk pressure",
      },
      {
        Type:               "PIDPressure",
        Status:             "False",
-       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:39:53 +0000 UTC"},
+       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:03 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"},
        Reason:             "KubeletHasSufficientPID",
        Message:            "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses:       {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
May 6 23:40:05.259: INFO: node status heartbeat is unchanged for 1.001046533s, waiting for 1m20s
May 6 23:40:06.260: INFO: node status heartbeat is unchanged for 2.001323742s, waiting for 1m20s
May 6 23:40:07.261: INFO: node status heartbeat is unchanged for 3.002318763s, waiting for 1m20s
May 6 23:40:08.260: INFO: node status heartbeat is unchanged for 4.002050869s, waiting for 1m20s
May 6 23:40:09.258: INFO: node status heartbeat is unchanged for 5.000076447s, waiting for 1m20s
May 6 23:40:10.258: INFO: node status heartbeat is unchanged for 5.999990308s, waiting for 1m20s
May 6 23:40:11.259: INFO: node status heartbeat is unchanged for 7.000623804s, waiting for 1m20s
May 6 23:40:12.258: INFO: node status heartbeat is unchanged for 7.999930393s, waiting for 1m20s
May 6 23:40:13.260: INFO: node status heartbeat is unchanged for 9.001290667s, waiting for 1m20s
May 6 23:40:14.260: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:40:14.266: INFO:   v1.NodeStatus{
    Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase:       "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...},
      {
        Type:               "MemoryPressure",
        Status:             "False",
-       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:03 +0000 UTC"},
+       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:13 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"},
        Reason:             "KubeletHasSufficientMemory",
        Message:            "kubelet has sufficient memory available",
      },
      {
        Type:               "DiskPressure",
        Status:             "False",
-       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:03 +0000 UTC"},
+       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:13 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"},
        Reason:             "KubeletHasNoDiskPressure",
        Message:            "kubelet has no disk pressure",
      },
      {
        Type:               "PIDPressure",
        Status:             "False",
-       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:03 +0000 UTC"},
+       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:13 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"},
        Reason:             "KubeletHasSufficientPID",
        Message:            "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses:       {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
May 6 23:40:15.259: INFO: node status heartbeat is unchanged for 999.768821ms, waiting for 1m20s
May 6 23:40:16.260: INFO: node status heartbeat is unchanged for 2.000453052s, waiting for 1m20s
May 6 23:40:17.258: INFO: node status heartbeat is unchanged for 2.999164889s, waiting for 1m20s
May 6 23:40:18.260: INFO: node status heartbeat is unchanged for 4.000765946s, waiting for 1m20s
May 6 23:40:19.261: INFO: node status heartbeat is unchanged for 5.001398347s, waiting for 1m20s
May 6 23:40:20.259: INFO: node status heartbeat is unchanged for 5.999854675s, waiting for 1m20s
May 6 23:40:21.259: INFO: node status heartbeat is unchanged for 6.999355566s, waiting for 1m20s
May 6 23:40:22.262: INFO: node status heartbeat is unchanged for 8.002531057s, waiting for 1m20s
May 6 23:40:23.261: INFO: node status heartbeat is unchanged for 9.002006583s, waiting for 1m20s
May 6 23:40:24.258: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 6 23:40:24.263: INFO:   v1.NodeStatus{
    Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase:       "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-06 20:13:27 +0000 UTC"}, ...},
      {
        Type:               "MemoryPressure",
        Status:             "False",
-       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:13 +0000 UTC"},
+       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:23 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"},
        Reason:             "KubeletHasSufficientMemory",
        Message:            "kubelet has sufficient memory available",
      },
      {
        Type:               "DiskPressure",
        Status:             "False",
-       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:13 +0000 UTC"},
+       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:23 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"},
        Reason:             "KubeletHasNoDiskPressure",
        Message:            "kubelet has no disk pressure",
      },
      {
        Type:               "PIDPressure",
        Status:             "False",
-       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:13 +0000 UTC"},
+       LastHeartbeatTime:  v1.Time{Time: s"2022-05-06 23:40:23 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-05-06 20:09:17 +0000 UTC"},
        Reason:             "KubeletHasSufficientPID",
        Message:            "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-06 20:10:27 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses:       {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
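The repeated blocks above are structure diffs between consecutive v1.NodeStatus snapshots: every ~10s only the LastHeartbeatTime of the kubelet-owned conditions (MemoryPressure, DiskPressure, PIDPressure, Ready) advances, while all other fields are reported identical. The following standalone Go sketch approximates the same one-second polling loop; it is not the suite's implementation (that lives at node_lease.go:112 per the spec header below), and only the kubeconfig path and node name are taken from this log:

    // heartbeat_watch.go - poll a node's conditions once per second and print
    // the kubelet heartbeat timestamps; an illustrative approximation only.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // kubeconfig path as used by this suite; adjust for your cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        for {
            // "node2" is the node under observation in the log above.
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "node2", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            // The kubelet refreshes LastHeartbeatTime on the conditions it
            // owns whenever it posts status; NetworkUnavailable is managed
            // by the network plugin, which is why it never moves above.
            for _, c := range node.Status.Conditions {
                fmt.Printf("%-20s %-6s heartbeat=%s\n", c.Type, c.Status, c.LastHeartbeatTime.Format(time.RFC3339))
            }
            time.Sleep(time.Second)
        }
    }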
May 6 23:40:25.258: INFO: node status heartbeat is unchanged for 999.887832ms, waiting for 1m20s
May 6 23:40:26.261: INFO: node status heartbeat is unchanged for 2.002327956s, waiting for 1m20s
May 6 23:40:26.263: INFO: node status heartbeat is unchanged for 2.004822456s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 23:40:26.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-3132" for this suite.

• [SLOW TEST:300.053 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":6,"skipped":1070,"failed":0}
May 6 23:40:26.283: INFO: Running AfterSuite actions on all nodes
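The spec above passes because of the NodeLease mechanism: with leases enabled, the kubelet renews a coordination.k8s.io/v1 Lease in the kube-node-lease namespace roughly every 10s as its lightweight heartbeat, and posts a full NodeStatus only when something changed or the report interval (5m by default in this release) expires. A minimal client-go sketch for inspecting a node's lease, assuming the same kubeconfig and node as above:

    // lease_check.go - read the heartbeat Lease the kubelet maintains for a
    // node; a minimal sketch, not part of the suite output.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Every node owns one Lease in kube-node-lease, named after the node.
        lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), "node2", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // HolderIdentity, RenewTime and LeaseDurationSeconds are pointers;
        // the kubelet populates all three on the leases it manages.
        fmt.Printf("holder=%s renewed=%s duration=%ds\n",
            *lease.Spec.HolderIdentity,
            lease.Spec.RenewTime.Format("15:04:05"),
            *lease.Spec.LeaseDurationSeconds)
    }

The same object can be checked without code via kubectl get lease node2 -n kube-node-lease -o yaml.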
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 23:33:11.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W0506 23:33:11.735570      31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 6 23:33:11.735: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 6 23:33:11.737: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
May 6 23:33:11.752: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 6 23:33:13.755: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 6 23:33:15.758: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 6 23:33:17.756: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 6 23:33:19.755: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 6 23:33:21.755: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 6 23:33:23.756: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 6 23:33:25.756: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 6 23:33:27.756: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 6 23:33:29.756: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 6 23:33:31.756: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 6 23:33:33.756: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
May 6 23:45:10.227: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-05-06 23:39:59 +0000 UTC restartedAt=2022-05-06 23:45:09 +0000 UTC (5m10s)
May 6 23:50:16.611: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-05-06 23:45:14 +0000 UTC restartedAt=2022-05-06 23:50:15 +0000 UTC (5m1s)
May 6 23:55:33.043: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-05-06 23:50:20 +0000 UTC restartedAt=2022-05-06 23:55:31 +0000 UTC (5m11s)
STEP: getting restart delay after a capped delay
May 7 00:00:49.564: INFO: getRestartDelay: restartCount = 10, finishedAt=2022-05-06 23:55:36 +0000 UTC restartedAt=2022-05-07 00:00:48 +0000 UTC (5m12s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 7 00:00:49.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8879" for this suite.

• [SLOW TEST:1657.869 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":1,"skipped":123,"failed":0}
May 7 00:00:49.579: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":9,"skipped":728,"failed":0}
May 6 23:35:36.207: INFO: Running AfterSuite actions on all nodes
May 7 00:00:49.618: INFO: Running AfterSuite actions on node 1
May 7 00:00:49.618: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5773 Specs in 1658.587 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5720 Skipped

Ginkgo ran 1 suite in 27m40.181606519s
Test Suite Failed
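For reference, the restart gaps measured by the back-off spec are consistent with the kubelet's crash-loop policy: the delay starts around 10s, doubles on each failed restart, and saturates at MaxContainerBackOff (5m), so by restartCount 7 every gap is the 5m cap plus a few seconds of pod-sync jitter (hence the 5m10s, 5m1s, 5m11s, 5m12s readings above). A small sketch of that progression, with the constants assumed to be the kubelet v1.21 defaults rather than read from the suite:

    // backoff_cap.go - the crash-loop back-off progression the spec measures:
    // delays double from an initial 10s and are capped at MaxContainerBackOff.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initialBackOff      = 10 * time.Second // assumed kubelet base delay
            maxContainerBackOff = 5 * time.Minute  // the cap this spec exercises
        )
        delay := initialBackOff
        for restart := 1; restart <= 10; restart++ {
            fmt.Printf("restart %2d: wait %v before next start\n", restart, delay)
            // Double on every failed restart, saturating at the cap; by
            // restart 6 the delay is pinned at 5m0s.
            delay *= 2
            if delay > maxContainerBackOff {
                delay = maxContainerBackOff
            }
        }
    }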