Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1650669713 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Apr 22 23:21:55.068: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:21:55.073: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 22 23:21:55.100: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 22 23:21:55.168: INFO: The status of Pod cmk-init-discover-node1-7s78z is Succeeded, skipping waiting
Apr 22 23:21:55.168: INFO: The status of Pod cmk-init-discover-node2-2m4dr is Succeeded, skipping waiting
Apr 22 23:21:55.168: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 22 23:21:55.168: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 22 23:21:55.168: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 22 23:21:55.187: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 22 23:21:55.187: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 22 23:21:55.187: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Apr 22 23:21:55.187: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Apr 22 23:21:55.187: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Apr 22 23:21:55.187: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Apr 22 23:21:55.187: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 22 23:21:55.187: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 22 23:21:55.187: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 22 23:21:55.187: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 22 23:21:55.187: INFO: e2e test version: v1.21.9
Apr 22 23:21:55.188: INFO: kube-apiserver version: v1.21.1
Apr 22 23:21:55.189: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:21:55.194: INFO: Cluster IP family: ipv4
Apr 22 23:21:55.195: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:21:55.218: INFO: Cluster IP family: ipv4
Apr 22 23:21:55.203: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:21:55.223: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Apr 22 23:21:55.205: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:21:55.226: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Apr 22 23:21:55.215: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:21:55.235: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Apr 22 23:21:55.213: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:21:55.237: INFO: Cluster IP family: ipv4
SS
------------------------------
Apr 22 23:21:55.214: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:21:55.237: INFO: Cluster IP family: ipv4
S
------------------------------
Apr 22 23:21:55.219: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:21:55.239: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
Apr 22 23:21:55.225: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:21:55.246: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Apr 22 23:21:55.226: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:21:55.249: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
W0422 23:21:55.335200 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:21:55.335: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:21:55.337: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] the kubelet should create and update a lease in the kube-node-lease namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50
STEP: check that lease for this Kubelet exists in the kube-node-lease namespace
STEP: check that node lease is updated at least once within the lease duration
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:21:55.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-92" for this suite.
•SSSSSS
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":1,"skipped":22,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
W0422 23:21:55.499956 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:21:55.500: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:21:55.502: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] should have OwnerReferences set
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:21:55.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-9998" for this suite.
•SSS
------------------------------
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
W0422 23:21:55.508835 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:21:55.509: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:21:55.510: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
Apr 22 23:21:55.513: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:21:55.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-3592" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    should enforce an AppArmor profile [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43

    Only supported for node OS distro [gci ubuntu] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":74,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-problem-detector
W0422 23:21:55.534900 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:21:55.535: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:21:55.536: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52
Apr 22 23:21:55.539: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:21:55.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-4782" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W0422 23:21:55.345647 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:21:55.345: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:21:55.347: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull image [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:22:01.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9478" for this suite.
• [SLOW TEST:6.074 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":22,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
W0422 23:21:55.570611 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:21:55.570: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:21:55.572: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
Apr 22 23:21:55.580: INFO: Found ClusterRoles; assuming RBAC is enabled.
[It] should create a pod that prints his name and namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
STEP: creating the pod
Apr 22 23:21:55.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9374 create -f -'
Apr 22 23:21:56.163: INFO: stderr: ""
Apr 22 23:21:56.163: INFO: stdout: "pod/dapi-test-pod created\n"
STEP: checking if name and namespace were passed correctly
Apr 22 23:22:02.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9374 logs dapi-test-pod test-container'
Apr 22 23:22:02.365: INFO: stderr: ""
Apr 22 23:22:02.365: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-9374\nMY_POD_IP=10.244.4.134\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
Apr 22 23:22:02.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9374 logs dapi-test-pod test-container'
Apr 22 23:22:02.625: INFO: stderr: ""
Apr 22 23:22:02.625: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-9374\nMY_POD_IP=10.244.4.134\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
[AfterEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:22:02.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-9374" for this suite.
• [SLOW TEST:7.085 seconds]
[sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133
    should create a pod that prints his name and namespace
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":1,"skipped":91,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:22:02.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should update ConfigMap successfully
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140
STEP: Creating ConfigMap configmap-7798/configmap-test-703de62e-3ba9-4b7b-aea8-fb8474bf3be7
STEP: Updating configMap configmap-7798/configmap-test-703de62e-3ba9-4b7b-aea8-fb8474bf3be7
STEP: Verifying update of ConfigMap configmap-7798/configmap-test-703de62e-3ba9-4b7b-aea8-fb8474bf3be7
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:22:02.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7798" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":2,"skipped":245,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
W0422 23:21:55.275933 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:21:55.276: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:21:55.279: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477
STEP: Creating pod startup-override-3bae49a3-4bd1-460c-8396-d7724ac0bc5d in namespace container-probe-3694
Apr 22 23:22:01.303: INFO: Started pod startup-override-3bae49a3-4bd1-460c-8396-d7724ac0bc5d in namespace container-probe-3694
STEP: checking the pod's current state and verifying that restartCount is present
Apr 22 23:22:01.306: INFO: Initial restart count of pod startup-override-3bae49a3-4bd1-460c-8396-d7724ac0bc5d is 0
Apr 22 23:22:03.312: INFO: Restart count of pod container-probe-3694/startup-override-3bae49a3-4bd1-460c-8396-d7724ac0bc5d is now 1 (2.006268149s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:22:03.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3694" for this suite.
• [SLOW TEST:8.087 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477
------------------------------
{"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":1,"skipped":9,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
W0422 23:21:55.649640 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:21:55.649: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:21:55.651: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Apr 22 23:21:55.667: INFO: Waiting up to 5m0s for pod "downward-api-96f0225a-616f-4ee4-acf1-0b1879de3a4f" in namespace "downward-api-1050" to be "Succeeded or Failed"
Apr 22 23:21:55.669: INFO: Pod "downward-api-96f0225a-616f-4ee4-acf1-0b1879de3a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120251ms
Apr 22 23:21:57.673: INFO: Pod "downward-api-96f0225a-616f-4ee4-acf1-0b1879de3a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005370324s
Apr 22 23:21:59.677: INFO: Pod "downward-api-96f0225a-616f-4ee4-acf1-0b1879de3a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009502932s
Apr 22 23:22:01.681: INFO: Pod "downward-api-96f0225a-616f-4ee4-acf1-0b1879de3a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01372282s
Apr 22 23:22:03.685: INFO: Pod "downward-api-96f0225a-616f-4ee4-acf1-0b1879de3a4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017168474s
STEP: Saw pod success
Apr 22 23:22:03.685: INFO: Pod "downward-api-96f0225a-616f-4ee4-acf1-0b1879de3a4f" satisfied condition "Succeeded or Failed"
Apr 22 23:22:03.687: INFO: Trying to get logs from node node1 pod downward-api-96f0225a-616f-4ee4-acf1-0b1879de3a4f container dapi-container:
STEP: delete the pod
Apr 22 23:22:03.699: INFO: Waiting for pod downward-api-96f0225a-616f-4ee4-acf1-0b1879de3a4f to disappear
Apr 22 23:22:03.701: INFO: Pod downward-api-96f0225a-616f-4ee4-acf1-0b1879de3a4f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:22:03.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1050" for this suite.
• [SLOW TEST:8.082 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":127,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Apr 22 23:21:55.603: INFO: Waiting up to 5m0s for pod "security-context-ea5bb13c-751e-453e-81d7-4edb95a940fd" in namespace "security-context-4105" to be "Succeeded or Failed"
Apr 22 23:21:55.606: INFO: Pod "security-context-ea5bb13c-751e-453e-81d7-4edb95a940fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.540523ms
Apr 22 23:21:57.610: INFO: Pod "security-context-ea5bb13c-751e-453e-81d7-4edb95a940fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006516609s
Apr 22 23:21:59.614: INFO: Pod "security-context-ea5bb13c-751e-453e-81d7-4edb95a940fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01026085s
Apr 22 23:22:01.618: INFO: Pod "security-context-ea5bb13c-751e-453e-81d7-4edb95a940fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014920459s
Apr 22 23:22:03.622: INFO: Pod "security-context-ea5bb13c-751e-453e-81d7-4edb95a940fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018350664s
STEP: Saw pod success
Apr 22 23:22:03.622: INFO: Pod "security-context-ea5bb13c-751e-453e-81d7-4edb95a940fd" satisfied condition "Succeeded or Failed"
Apr 22 23:22:03.624: INFO: Trying to get logs from node node2 pod security-context-ea5bb13c-751e-453e-81d7-4edb95a940fd container test-container:
STEP: delete the pod
Apr 22 23:22:03.755: INFO: Waiting for pod security-context-ea5bb13c-751e-453e-81d7-4edb95a940fd to disappear
Apr 22 23:22:03.757: INFO: Pod security-context-ea5bb13c-751e-453e-81d7-4edb95a940fd no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:22:03.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4105" for this suite.
• [SLOW TEST:8.191 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":93,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W0422 23:21:55.229806 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:21:55.230: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:21:55.233: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 22 23:22:04.285: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:22:04.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5188" for this suite.
• [SLOW TEST:9.101 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
W0422 23:21:55.540977 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:21:55.541: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:21:55.543: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449
STEP: Creating pod liveness-override-b5661e5f-b5a0-422c-a2f0-f493295cf17f in namespace container-probe-5854
Apr 22 23:22:03.562: INFO: Started pod liveness-override-b5661e5f-b5a0-422c-a2f0-f493295cf17f in namespace container-probe-5854
STEP: checking the pod's current state and verifying that restartCount is present
Apr 22 23:22:03.565: INFO: Initial restart count of pod liveness-override-b5661e5f-b5a0-422c-a2f0-f493295cf17f is 0
Apr 22 23:22:05.571: INFO: Restart count of pod container-probe-5854/liveness-override-b5661e5f-b5a0-422c-a2f0-f493295cf17f is now 1 (2.006239723s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:22:05.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5854" for this suite.
• [SLOW TEST:10.073 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449
------------------------------
{"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":1,"skipped":91,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:22:01.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Apr 22 23:22:01.549: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-463ea6af-609b-4874-87af-4ed942237478" in namespace "security-context-test-1698" to be "Succeeded or Failed"
Apr 22 23:22:01.552: INFO: Pod "alpine-nnp-nil-463ea6af-609b-4874-87af-4ed942237478": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080074ms
Apr 22 23:22:03.556: INFO: Pod "alpine-nnp-nil-463ea6af-609b-4874-87af-4ed942237478": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006143033s
Apr 22 23:22:05.560: INFO: Pod "alpine-nnp-nil-463ea6af-609b-4874-87af-4ed942237478": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010430404s
Apr 22 23:22:07.564: INFO: Pod "alpine-nnp-nil-463ea6af-609b-4874-87af-4ed942237478": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014720365s
Apr 22 23:22:09.568: INFO: Pod "alpine-nnp-nil-463ea6af-609b-4874-87af-4ed942237478": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018074103s
Apr 22 23:22:09.568: INFO: Pod "alpine-nnp-nil-463ea6af-609b-4874-87af-4ed942237478" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:22:09.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1698" for this suite.
• [SLOW TEST:8.079 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":81,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:22:09.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
Apr 22 23:22:09.725: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:22:09.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-5338" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    can disable an AppArmor profile, using unconfined [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47

    Only supported for node OS distro [gci ubuntu] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:21:55.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Apr 22 23:21:55.976: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4" in namespace "security-context-test-3690" to be "Succeeded or Failed"
Apr 22 23:21:55.979: INFO: Pod "busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.51819ms
Apr 22 23:21:57.982: INFO: Pod "busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005453288s
Apr 22 23:21:59.987: INFO: Pod "busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011146439s
Apr 22 23:22:01.991: INFO: Pod "busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015365932s
Apr 22 23:22:03.995: INFO: Pod "busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019114946s
Apr 22 23:22:05.999: INFO: Pod "busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023118589s
Apr 22 23:22:08.003: INFO: Pod "busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027136935s
Apr 22 23:22:10.009: INFO: Pod "busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4": Phase="Failed", Reason="", readiness=false. Elapsed: 14.032672459s
Apr 22 23:22:10.009: INFO: Pod "busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:22:10.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3690" for this suite.
• [SLOW TEST:14.083 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":264,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:22:10.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
E0422 23:22:12.308390 35 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 263 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x654af00, 0x9c066c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x654af00, 0x9c066c0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc001c44f0c, 0x2, 0x2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00151f600, 0xc001c44f00, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000742210, 0xc00151f600, 0xc0022db440, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc000742210, 0xc00151f600, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000742210, 0xc00151f600, 0xc000742210, 0xc00151f600)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00151f600, 0x14, 0xc004897740)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x77b33d8, 0xc002518420, 0xc00072de00, 0x14, 0xc004897740, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92
k8s.io/kubernetes/test/e2e/common/node.glob..func2.18()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0011dc0c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0011dc0c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc000a132c0, 0x76a2fe0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003767680, 0x0, 0x76a2fe0, 0xc000190840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003767680, 0x76a2fe0, 0xc000190840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0006cb040, 0xc003767680, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0006cb040, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0006cb040, 0xc0045ec880)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7f32746d1610, 0xc00340b200, 0x6f170c8, 0x14, 0xc0017efd10, 0x3, 0x3, 0x7759478, 0xc000190840, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x76a80c0, 0xc00340b200, 0x6f170c8, 0x14, 0xc00231f140, 0x3, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x76a80c0, 0xc00340b200, 0x6f170c8, 0x14, 0xc0029f9200, 0x2, 0x2, 0x25)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00340b200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00340b200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00340b200, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-4089".
STEP: Found 2 events.
Apr 22 23:22:12.311: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for startup-754bf168-ebaf-4d9c-8189-93021aea57cb: { } Scheduled: Successfully assigned container-probe-4089/startup-754bf168-ebaf-4d9c-8189-93021aea57cb to node2
Apr 22 23:22:12.311: INFO: At 2022-04-22 23:22:12 +0000 UTC - event for startup-754bf168-ebaf-4d9c-8189-93021aea57cb: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1"
Apr 22 23:22:12.313: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 22 23:22:12.313: INFO: startup-754bf168-ebaf-4d9c-8189-93021aea57cb node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:22:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:22:10 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:22:10 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:22:10 +0000 UTC }]
Apr 22 23:22:12.313: INFO:
Apr 22 23:22:12.318: INFO: Logging node info for node master1
Apr 22 23:22:12.320: INFO: Node Info: &Node{ObjectMeta:{master1 70710064-7222-41b1-b51e-81deaa6e7014 76194 0 2022-04-22 19:56:45 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:56:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1
2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-22 20:04:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:04 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:04 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:04 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:22:04 +0000 UTC,LastTransitionTime:2022-04-22 19:59:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:025a90e4dec046189b065fcf68380be7,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:7e907077-ed98-4d46-8305-29673eaf3bf3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:22:12.321: INFO: Logging kubelet events for node master1
Apr 22 23:22:12.323: INFO: Logging pods the kubelet thinks is on node master1
Apr 22 23:22:12.354: INFO: kube-apiserver-master1 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.354: INFO: Container kube-apiserver ready: true, restart count 0
Apr 22 23:22:12.354: INFO: kube-controller-manager-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.354: INFO: Container kube-controller-manager ready: true, restart count 2
Apr 22 23:22:12.354: INFO: kube-multus-ds-amd64-px448 started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.354: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:22:12.354: INFO: prometheus-operator-585ccfb458-zsrdh started at 2022-04-22 20:13:26 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:22:12.354: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:22:12.354: INFO: Container prometheus-operator ready: true, restart count 0
Apr 22 23:22:12.354: INFO: kube-scheduler-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.354: INFO: Container kube-scheduler ready: true, restart count 0
Apr 22 23:22:12.354: INFO: kube-proxy-hfgsd started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.354: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:22:12.354: INFO: kube-flannel-6vhmq started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:22:12.354: INFO: Init container install-cni ready: true, restart count 0
Apr 22 23:22:12.354: INFO: Container kube-flannel ready: true, restart count 1
Apr 22 23:22:12.354: INFO: dns-autoscaler-7df78bfcfb-smkxp started at 2022-04-22 20:00:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.354: INFO: Container autoscaler ready: true, restart count 2
Apr 22 23:22:12.354: INFO: container-registry-65d7c44b96-7r6xc started at 2022-04-22 20:04:24 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:22:12.354: INFO: Container docker-registry ready: true, restart count 0
Apr 22 23:22:12.354: INFO: Container nginx ready: true, restart count 0
Apr 22 23:22:12.354: INFO: node-exporter-b7qpl started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:22:12.354: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:22:12.354: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:22:12.440: INFO: Latency metrics for node master1
Apr 22 23:22:12.440: INFO: Logging node info for node master2
Apr 22 23:22:12.442: INFO: Node Info: &Node{ObjectMeta:{master2 4a346a45-ed0b-49d9-a2ad-b419d2c4705c 76384 0 2022-04-22 19:57:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-04-22 20:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-04-22 20:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory:
{{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:10 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:10 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:10 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:22:10 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a68fd05f71b4f40ab5ab92028e707cc,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:45292226-7389-4aa9-8a98-33e443731d14,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:22:12.443: INFO: Logging kubelet events for node master2
Apr 22 23:22:12.447: INFO: Logging pods the kubelet thinks is on node master2
Apr 22 23:22:12.461: INFO: node-exporter-4tbfp started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:22:12.461: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:22:12.461: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:22:12.461: INFO: kube-apiserver-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.461: INFO: Container kube-apiserver ready: true, restart count 0
Apr 22 23:22:12.461: INFO: kube-controller-manager-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.461: INFO: Container kube-controller-manager ready: true, restart count 2
Apr 22 23:22:12.461: INFO: kube-proxy-df6vx started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.461: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:22:12.461: INFO: node-feature-discovery-controller-cff799f9f-jfpb6 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.461: INFO: Container nfd-controller ready: true, restart count 0
Apr 22 23:22:12.461: INFO: kube-scheduler-master2 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.461: INFO: Container kube-scheduler ready: true, restart count 1
Apr 22 23:22:12.461: INFO: kube-flannel-jlvdn started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:22:12.461: INFO: Init container install-cni ready: true, restart count 0
Apr 22 23:22:12.461: INFO: Container kube-flannel ready: true, restart count 1
Apr 22 23:22:12.461: INFO: kube-multus-ds-amd64-7hw9v started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.461: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:22:12.461: INFO: coredns-8474476ff8-fhb42 started at 2022-04-22 20:00:09 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.461: INFO: Container coredns ready: true, restart count 1
Apr 22 23:22:12.544: INFO: Latency metrics for node master2
Apr 22 23:22:12.544: INFO: Logging node info for node master3
Apr 22 23:22:12.547: INFO: Node Info: &Node{ObjectMeta:{master3 43c25e47-7b5c-4cf0-863e-39d16b72dcb3 76395 0 2022-04-22 19:57:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-22 19:57:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-04-22 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-04-22 20:11:03 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:10 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:10 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:10 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:22:10 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e38c1766e8048fab7e120a1bdaf206c,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7266f836-7ba1-4d9b-9691-d8344ab173f1,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:22:12.548: INFO: Logging kubelet events for node master3
Apr 22 23:22:12.550: INFO: Logging pods the kubelet thinks is on node master3
Apr 22 23:22:12.565: INFO: kube-proxy-z9q2t started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.565: INFO: Container kube-proxy ready: true, restart count 1
Apr 22 23:22:12.565: INFO: kube-flannel-6jkw9 started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:22:12.565: INFO: Init container install-cni ready: true, restart count 0
Apr 22 23:22:12.565: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 23:22:12.565: INFO: kube-multus-ds-amd64-tlrjm started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.565: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:22:12.565: INFO: coredns-8474476ff8-fdcj7 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.565: INFO: Container coredns ready: true, restart count 1
Apr 22 23:22:12.565: INFO: node-exporter-tnqsz started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:22:12.565: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:22:12.565: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:22:12.565: INFO: kube-apiserver-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.566: INFO: Container kube-apiserver ready: true, restart count 0
Apr 22 23:22:12.566: INFO: kube-controller-manager-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.566: INFO: Container kube-controller-manager ready: true, restart count 3
Apr 22 23:22:12.566: INFO: kube-scheduler-master3 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.566: INFO: Container kube-scheduler ready: true, restart count 2
Apr 22 23:22:12.651: INFO: Latency metrics for node master3
Apr 22 23:22:12.651: INFO: Logging node info for node node1
Apr 22 23:22:12.654: INFO: Node Info: &Node{ObjectMeta:{node1 e0ec3d42-4e2e-47e3-b369-98011b25b39b 76217 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10
feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2022-04-22 22:25:16 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2022-04-22 22:25:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:29 +0000 UTC,LastTransitionTime:2022-04-22 20:02:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:05 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:05 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:05 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:22:05 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cb8bd90647b418e9defe4fbcf1e6b5b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:bd49e3f7-3bce-4d4e-8596-432fc9a7c1c3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:22:12.655: INFO: Logging kubelet events for node node1
Apr 22 23:22:12.657: INFO: Logging pods the kubelet thinks is on node node1
Apr 22 23:22:12.679: INFO: cmk-init-discover-node1-7s78z started at 2022-04-22 20:11:46 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container discover ready: false, restart count 0
Apr 22 23:22:12.679: INFO: Container init ready: false, restart count 0
Apr 22 23:22:12.679: INFO: Container install ready: false, restart count 0
Apr 22 23:22:12.679: INFO: startup-75196bbc-048c-4e1b-b1fb-2332ccd3f16d started at 2022-04-22 23:21:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container busybox ready: false, restart count 0
Apr 22 23:22:12.679: INFO: node-feature-discovery-worker-2hkr5 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:22:12.679: INFO: node-exporter-9zzfv started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:22:12.679: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:22:12.679: INFO: busybox-33a610a3-f7ab-471b-83df-3db7781edd19 started at 2022-04-22 23:22:05 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container busybox ready: false, restart count 0
Apr 22 23:22:12.679: INFO: kube-multus-ds-amd64-x8jqs started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:22:12.679: INFO: busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4 started at 2022-04-22 23:21:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container busybox-readonly-true-bd74365c-b703-4c1e-9c76-d59ebc83f0a4 ready: false, restart count 0
Apr 22 23:22:12.679: INFO: pod-prestop-hook-6ed9a3be-33e2-40e1-a9d0-95d1143c1316 started at 2022-04-22 23:21:56 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container nginx ready: true, restart count 0
Apr 22 23:22:12.679: INFO: cmk-2vd7z started at 2022-04-22 20:12:29 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:22:12.679: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:22:12.679: INFO: prometheus-k8s-0 started at 2022-04-22 20:13:52 +0000 UTC (0+4 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container config-reloader ready: true, restart count 0
Apr 22 23:22:12.679: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:22:12.679: INFO: Container grafana ready: true, restart count 0
Apr 22 23:22:12.679: INFO: Container prometheus ready: true, restart count 1
Apr 22 23:22:12.679: INFO: collectd-g2c8k started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container collectd ready: true, restart count 0
Apr 22 23:22:12.679: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:22:12.679: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:22:12.679: INFO: implicit-nonroot-uid started at 2022-04-22 23:22:10 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container implicit-nonroot-uid ready: false, restart count 0
Apr 22 23:22:12.679: INFO: nginx-proxy-node1 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 23:22:12.679: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:22:12.679: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:22:12.679: INFO: kube-proxy-v8fdh started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:22:12.679: INFO: pod-submit-status-1-0 started at 2022-04-22 23:22:04 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container busybox ready: false, restart count 0
Apr 22 23:22:12.679: INFO: kube-flannel-l4rjs started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Init container install-cni ready: true, restart count 2
Apr 22 23:22:12.679: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 23:22:12.679: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g started at 2022-04-22 20:16:40 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:22:12.679: INFO: Container tas-extender ready: true, restart count 0
Apr 22 23:22:12.679: INFO: pod-submit-status-2-1 started at (0+0 container statuses recorded)
Apr 22 23:22:13.209: INFO: Latency metrics for node node1
Apr 22 23:22:13.209: INFO: Logging node info for node node2
Apr 22 23:22:13.213: INFO: Node Info: &Node{ObjectMeta:{node2 ef89f5d1-0c69-4be8-a041-8437402ef215 76256 0 2022-04-22 19:58:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true
feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:12:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 22:25:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-04-22 
22:42:49 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:30 +0000 UTC,LastTransitionTime:2022-04-22 20:02:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:07 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:07 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:22:07 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:22:07 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e6f6d1644f942b881dbf2d9722ff85b,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:cc218e06-beff-411d-b91e-f4a272d9c83f,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 22 23:22:13.214: INFO: Logging kubelet events for node node2 Apr 22 23:22:13.216: INFO: Logging pods the kubelet thinks is on node node2 Apr 22 23:22:13.231: INFO: kube-flannel-2kskh started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded) Apr 22 23:22:13.231: INFO: Init container install-cni ready: true, restart count 0 Apr 22 23:22:13.231: INFO: Container kube-flannel ready: true, restart count 2 Apr 22 23:22:13.231: INFO: cmk-init-discover-node2-2m4dr started at 2022-04-22 20:12:06 +0000 UTC (0+3 container statuses recorded) Apr 22 23:22:13.231: INFO: Container discover ready: false, restart count 0 Apr 22 23:22:13.231: INFO: Container init ready: false, restart count 0 Apr 22 23:22:13.231: INFO: Container install ready: false, restart count 0 Apr 22 23:22:13.231: INFO: node-exporter-c4bhs started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded) Apr 22 23:22:13.231: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 22 23:22:13.231: INFO: Container node-exporter ready: true, restart count 0 Apr 22 23:22:13.231: INFO: busybox-3baf5fbf-8e0d-450d-ad31-0b6a374ea6a1 started at 2022-04-22 23:22:04 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.231: INFO: Container busybox ready: true, restart count 0 Apr 22 23:22:13.231: INFO: kube-proxy-jvkvz started at 2022-04-22 19:58:37 +0000 
UTC (0+1 container statuses recorded) Apr 22 23:22:13.231: INFO: Container kube-proxy ready: true, restart count 2 Apr 22 23:22:13.231: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.231: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 22 23:22:13.232: INFO: node-feature-discovery-worker-bktph started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.232: INFO: Container nfd-worker ready: true, restart count 0 Apr 22 23:22:13.232: INFO: cmk-vdkxb started at 2022-04-22 20:12:30 +0000 UTC (0+2 container statuses recorded) Apr 22 23:22:13.232: INFO: Container nodereport ready: true, restart count 0 Apr 22 23:22:13.232: INFO: Container reconcile ready: true, restart count 0 Apr 22 23:22:13.232: INFO: cmk-webhook-6c9d5f8578-nmxns started at 2022-04-22 20:12:30 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.232: INFO: Container cmk-webhook ready: true, restart count 0 Apr 22 23:22:13.232: INFO: pod-back-off-image started at 2022-04-22 23:22:03 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.232: INFO: Container back-off ready: true, restart count 0 Apr 22 23:22:13.232: INFO: pod-submit-status-0-1 started at 2022-04-22 23:22:09 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.232: INFO: Container busybox ready: false, restart count 0 Apr 22 23:22:13.232: INFO: startup-754bf168-ebaf-4d9c-8189-93021aea57cb started at 2022-04-22 23:22:10 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.232: INFO: Container busybox ready: false, restart count 0 Apr 22 23:22:13.232: INFO: nginx-proxy-node2 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.232: INFO: Container nginx-proxy ready: true, restart count 1 Apr 22 23:22:13.232: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.232: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 22 23:22:13.232: INFO: alpine-nnp-nil-463ea6af-609b-4874-87af-4ed942237478 started at 2022-04-22 23:22:01 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.232: INFO: Container alpine-nnp-nil-463ea6af-609b-4874-87af-4ed942237478 ready: false, restart count 0 Apr 22 23:22:13.232: INFO: kube-multus-ds-amd64-kjrqq started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.232: INFO: Container kube-multus ready: true, restart count 1 Apr 22 23:22:13.232: INFO: collectd-ptpbz started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded) Apr 22 23:22:13.232: INFO: Container collectd ready: true, restart count 0 Apr 22 23:22:13.232: INFO: Container collectd-exporter ready: true, restart count 0 Apr 22 23:22:13.232: INFO: Container rbac-proxy ready: true, restart count 0 Apr 22 23:22:13.232: INFO: back-off-cap started at 2022-04-22 23:22:03 +0000 UTC (0+1 container statuses recorded) Apr 22 23:22:13.232: INFO: Container back-off-cap ready: true, restart count 0 Apr 22 23:22:13.232: INFO: pod-always-succeedd97d5409-0751-42d4-8ab2-b0564be405d0 started at 2022-04-22 23:22:03 +0000 UTC (1+1 container statuses recorded) Apr 22 23:22:13.232: INFO: Init container foo ready: true, restart count 0 Apr 22 23:22:13.232: INFO: Container bar ready: false, restart count 0 Apr 22 23:22:13.491: INFO: Latency metrics for node node2 Apr 22 23:22:13.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-probe-4089" for this suite. •! Panic [3.232 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Test Panicked runtime error: invalid memory address or nil pointer dereference /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109 panic(0x654af00, 0x9c066c0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc001c44f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00151f600, 0xc001c44f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000742210, 0xc00151f600, 0xc0022db440, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc000742210, 0xc00151f600, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000742210, 0xc00151f600, 0xc000742210, 0xc00151f600) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00151f600, 0x14, 0xc004897740) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x77b33d8, 0xc002518420, 0xc00072de00, 0x14, 0xc004897740, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00340b200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00340b200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00340b200, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:03.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Apr 22 23:22:03.131: INFO: Waiting up to 5m0s for pod "pod-always-succeedd97d5409-0751-42d4-8ab2-b0564be405d0" in namespace "pods-983" to be "Succeeded or Failed" Apr 22 23:22:03.133: INFO: Pod "pod-always-succeedd97d5409-0751-42d4-8ab2-b0564be405d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250693ms Apr 22 23:22:05.136: INFO: Pod "pod-always-succeedd97d5409-0751-42d4-8ab2-b0564be405d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004713921s Apr 22 23:22:07.140: INFO: Pod "pod-always-succeedd97d5409-0751-42d4-8ab2-b0564be405d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009326971s Apr 22 23:22:09.148: INFO: Pod "pod-always-succeedd97d5409-0751-42d4-8ab2-b0564be405d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016677779s Apr 22 23:22:11.154: INFO: Pod "pod-always-succeedd97d5409-0751-42d4-8ab2-b0564be405d0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023286972s Apr 22 23:22:13.160: INFO: Pod "pod-always-succeedd97d5409-0751-42d4-8ab2-b0564be405d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.028618321s STEP: Saw pod success Apr 22 23:22:13.160: INFO: Pod "pod-always-succeedd97d5409-0751-42d4-8ab2-b0564be405d0" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:15.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-983" for this suite. 
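Note on the panic recorded earlier in this output (the container-probe spec "should be ready immediately after startupProbe succeeds", which died with "invalid memory address or nil pointer dereference" inside pod.podContainerStarted at resource.go:334): the trace is consistent with dereferencing ContainerStatus.Started, which is a *bool and stays nil until the kubelet first reports the container. A minimal nil-safe sketch of that kind of check, assuming only the k8s.io/api types; this is an illustration of the failure mode, not the upstream code or its fix:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// containerStarted is a hypothetical, nil-guarded version of the check that
// appears to have panicked above. ContainerStatus.Started is a *bool that the
// kubelet leaves nil until it has reported the container at least once, so it
// must be checked before dereferencing.
func containerStarted(pod *corev1.Pod, idx int) bool {
	statuses := pod.Status.ContainerStatuses
	if idx < 0 || idx >= len(statuses) {
		return false // status not populated yet
	}
	started := statuses[idx].Started
	return started != nil && *started
}

func main() {
	pod := &corev1.Pod{}                   // no container statuses reported yet
	fmt.Println(containerStarted(pod, 0)) // prints false instead of panicking
}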
• [SLOW TEST:12.079 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":3,"skipped":311,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:15.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 Apr 22 23:22:15.237: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:15.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-3221" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:13.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:17.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5294" for this suite. 
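The container-runtime spec just above ("should not be able to pull image from invalid registry") only needs a pod whose image points at a registry that cannot resolve; the kubelet then parks the container in a waiting state (ErrImagePull, then ImagePullBackOff) and the test asserts on that instead of ever seeing the container run. A sketch of such a pod spec; the registry host and image name here are made up, the real test image differs:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name: "image-pull-test",
				// Unresolvable registry: the pull fails, so the container's
				// waiting reason cycles through ErrImagePull / ImagePullBackOff.
				Image: "invalid.registry.example/some/image:latest",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}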
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":3,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:09.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 Apr 22 23:22:10.027: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-3382" to be "Succeeded or Failed" Apr 22 23:22:10.029: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431494ms Apr 22 23:22:12.032: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005544854s Apr 22 23:22:14.035: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008542266s Apr 22 23:22:16.040: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012947235s Apr 22 23:22:18.045: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018457236s Apr 22 23:22:18.045: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:18.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3382" for this suite. 
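The "implicit-nonroot-uid" pod above exercises runAsNonRoot without an explicit runAsUser: when runAsUser is unset, the kubelet has to validate the UID baked into the image (its USER directive) before starting the container. A sketch of that spec, assuming the k8s.io/api types; the image name is hypothetical and ptr is a local helper:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "implicit-nonroot-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "example.registry/nonroot:latest", // hypothetical image whose USER is non-root
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: ptr(true),
					// RunAsUser deliberately unset: the kubelet checks the
					// image-specified UID instead.
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}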
• [SLOW TEST:8.087 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":3,"skipped":250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:17.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:19.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2730" for this suite. 
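The sysctl spec above ("greylisted, but not whitelisted") submits a pod requesting an unsafe sysctl and expects the kubelet to reject it. A sketch of the shape of that pod, assuming kernel.msgmax as the greylisted sysctl (the exact name and value in the real test may differ); without the kubelet being started with --allowed-unsafe-sysctls=kernel.msgmax, the pod fails with reason SysctlForbidden:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-greylist"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// kernel.msgmax is not in the kubelet's safe sysctl set, so
				// this pod is rejected unless explicitly allowed on the node.
				Sysctls: []corev1.Sysctl{{Name: "kernel.msgmax", Value: "10000000"}},
			},
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox:1.28",
				Command: []string{"/bin/true"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}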
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":4,"skipped":506,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:19.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 Apr 22 23:22:19.895: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:19.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-9752" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:15.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:20.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9396" for this suite. 
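The private-registry spec above is the complement of the invalid-registry case: the registry resolves, but the pull is unauthenticated, so the container again ends in ImagePullBackOff. A sketch under assumed names (registry host, image, and the "regcred" secret are all hypothetical); the commented line is what would make the pull succeed:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "private-image-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Without matching credentials the kubelet's pull is denied:
			// ImagePullSecrets: []corev1.LocalObjectReference{{Name: "regcred"}},
			Containers: []corev1.Container{{
				Name:  "private-image-test",
				Image: "registry.example.com/private/app:1.0", // hypothetical private image
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}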
• [SLOW TEST:5.068 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":4,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:19.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Apr 22 23:22:19.034: INFO: Waiting up to 5m0s for pod "security-context-9084d66f-4d01-4980-962b-25f93f447c39" in namespace "security-context-3786" to be "Succeeded or Failed" Apr 22 23:22:19.036: INFO: Pod "security-context-9084d66f-4d01-4980-962b-25f93f447c39": Phase="Pending", Reason="", readiness=false. Elapsed: 1.96906ms Apr 22 23:22:21.039: INFO: Pod "security-context-9084d66f-4d01-4980-962b-25f93f447c39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005763792s Apr 22 23:22:23.047: INFO: Pod "security-context-9084d66f-4d01-4980-962b-25f93f447c39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013918987s Apr 22 23:22:25.054: INFO: Pod "security-context-9084d66f-4d01-4980-962b-25f93f447c39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020884053s STEP: Saw pod success Apr 22 23:22:25.055: INFO: Pod "security-context-9084d66f-4d01-4980-962b-25f93f447c39" satisfied condition "Succeeded or Failed" Apr 22 23:22:25.057: INFO: Trying to get logs from node node2 pod security-context-9084d66f-4d01-4980-962b-25f93f447c39 container test-container: STEP: delete the pod Apr 22 23:22:25.069: INFO: Waiting for pod security-context-9084d66f-4d01-4980-962b-25f93f447c39 to disappear Apr 22 23:22:25.071: INFO: Pod security-context-9084d66f-4d01-4980-962b-25f93f447c39 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:25.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3786" for this suite. 
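The seccomp spec above still drives the legacy alpha annotation (the log shows "Creating a pod to test seccomp.security.alpha.kubernetes.io/pod"); the structured field that replaced those annotations from v1.19 onward is sketched below at container level, matching the spec title "unconfined on the container". Image and command are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "seccomp-unconfined"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.28",
				Command: []string{"/bin/true"},
				SecurityContext: &corev1.SecurityContext{
					// Container-level profile overrides any pod-level one;
					// Unconfined disables seccomp filtering for this container.
					SeccompProfile: &corev1.SeccompProfile{
						Type: corev1.SeccompProfileTypeUnconfined,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}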
• [SLOW TEST:6.076 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":4,"skipped":760,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:20.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Apr 22 23:22:20.685: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-e992f983-b6ae-45b8-b38d-5ad9716f621b" in namespace "security-context-test-9636" to be "Succeeded or Failed" Apr 22 23:22:20.687: INFO: Pod "busybox-privileged-true-e992f983-b6ae-45b8-b38d-5ad9716f621b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117468ms Apr 22 23:22:22.691: INFO: Pod "busybox-privileged-true-e992f983-b6ae-45b8-b38d-5ad9716f621b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005337262s Apr 22 23:22:24.695: INFO: Pod "busybox-privileged-true-e992f983-b6ae-45b8-b38d-5ad9716f621b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010085077s Apr 22 23:22:26.702: INFO: Pod "busybox-privileged-true-e992f983-b6ae-45b8-b38d-5ad9716f621b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016334855s Apr 22 23:22:26.702: INFO: Pod "busybox-privileged-true-e992f983-b6ae-45b8-b38d-5ad9716f621b" satisfied condition "Succeeded or Failed" Apr 22 23:22:26.811: INFO: Got logs for pod "busybox-privileged-true-e992f983-b6ae-45b8-b38d-5ad9716f621b": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:26.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9636" for this suite. 
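The "busybox-privileged-true" pod above runs with privileged: true, which grants the container all capabilities and host device access; the suite's privileged-pod specs verify this by running operations like "ip link add", which need CAP_NET_ADMIN. A sketch (the command is illustrative, not necessarily what this particular spec executed):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-true"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.28",
				// Needs CAP_NET_ADMIN; succeeds only because Privileged
				// grants the full capability set.
				Command: []string{"sh", "-c", "ip link add dummy0 type dummy"},
				SecurityContext: &corev1.SecurityContext{
					Privileged: ptr(true),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}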
• [SLOW TEST:6.166 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":5,"skipped":500,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:21:55.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Apr 22 23:22:28.071: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:28.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7653" for this suite. 
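The PreStop spec above deletes a running pod gracefully and checks that the process survives until the preStop hook has finished: on graceful deletion the kubelet runs preStop to completion (bounded by terminationGracePeriodSeconds) before signalling the container. A sketch of such a pod; durations and commands are illustrative. Note the LifecycleHandler type name assumes a recent k8s.io/api module (older releases call it Handler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-test"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: ptr(int64(30)), // upper bound for preStop + shutdown
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.LifecycleHandler{
						// While this runs, the pod is Terminating but the
						// process has not yet been signalled.
						Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 10"}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}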
• [SLOW TEST:32.086 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":2,"skipped":297,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:28.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:38.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2809" for this suite. • [SLOW TEST:10.043 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":3,"skipped":301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:25.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container Apr 22 23:22:25.164: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:22:27.168: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:22:29.168: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:22:31.170: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with 
Ready = true) Apr 22 23:22:33.169: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:22:35.168: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:22:37.168: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container Apr 22 23:22:37.170: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-5850 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:22:37.170: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:22:38.563: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-5850 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:22:38.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Apr 22 23:22:39.216: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-5850 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:22:39.216: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:39.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-5850" for this suite. • [SLOW TEST:14.239 seconds] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":5,"skipped":785,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:38.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Apr 22 23:22:38.382: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2502" to be "Succeeded or Failed" Apr 22 23:22:38.391: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.58535ms Apr 22 23:22:40.394: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011695965s Apr 22 23:22:42.397: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01477207s Apr 22 23:22:44.402: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019521577s Apr 22 23:22:46.408: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026197228s Apr 22 23:22:48.412: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.029787035s Apr 22 23:22:48.412: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:22:48.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2502" for this suite. • [SLOW TEST:10.076 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":4,"skipped":409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:39.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false Apr 22 23:23:01.622: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true Apr 22 23:23:02.622: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true Apr 22 23:23:03.622: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true Apr 22 23:23:04.621: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true Apr 22 23:23:05.623: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:06.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3829" for this suite. 
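The readiness-gates spec above declares custom conditions in the pod spec and then patches them through the status subresource; the pod's Ready condition becomes True only when the containers are ready and every gate condition is True, which is why flipping k8s.io/test-condition1 back to false eventually drags Ready back to false in the log. A sketch of the spec side (the condition types are taken from the log; the patching controller is not shown):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-ready"},
		Spec: corev1.PodSpec{
			// Ready requires containers ready AND both of these conditions
			// set to True in status.conditions by some external agent.
			ReadinessGates: []corev1.PodReadinessGate{
				{ConditionType: "k8s.io/test-condition1"},
				{ConditionType: "k8s.io/test-condition2"},
			},
			Containers: []corev1.Container{{
				Name:    "pod-readiness-gate",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "sleep 600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}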
• [SLOW TEST:27.081 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":6,"skipped":882,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:05.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-33a610a3-f7ab-471b-83df-3db7781edd19 in namespace container-probe-4688 Apr 22 23:22:13.745: INFO: Started pod busybox-33a610a3-f7ab-471b-83df-3db7781edd19 in namespace container-probe-4688 Apr 22 23:22:13.745: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (1.654µs elapsed) Apr 22 23:22:15.745: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (2.000732169s elapsed) Apr 22 23:22:17.747: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (4.002015939s elapsed) Apr 22 23:22:19.748: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (6.003610748s elapsed) Apr 22 23:22:21.751: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (8.006445814s elapsed) Apr 22 23:22:23.752: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (10.006956387s elapsed) Apr 22 23:22:25.752: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (12.007628708s elapsed) Apr 22 23:22:27.758: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (14.013120505s elapsed) Apr 22 23:22:29.760: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (16.015233231s elapsed) Apr 22 23:22:31.762: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (18.017188718s elapsed) Apr 22 23:22:33.763: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (20.017982894s elapsed) Apr 22 23:22:35.764: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (22.018960127s elapsed) Apr 22 23:22:37.765: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (24.020466704s elapsed) Apr 22 23:22:39.767: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (26.022459701s elapsed) Apr 22 23:22:41.769: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not 
ready (28.024008485s elapsed) Apr 22 23:22:43.770: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (30.024825422s elapsed) Apr 22 23:22:45.771: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (32.026027238s elapsed) Apr 22 23:22:47.777: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (34.032105269s elapsed) Apr 22 23:22:49.778: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (36.03366987s elapsed) Apr 22 23:22:51.780: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (38.035151974s elapsed) Apr 22 23:22:53.781: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (40.036028068s elapsed) Apr 22 23:22:55.782: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (42.037595797s elapsed) Apr 22 23:22:57.784: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (44.039416651s elapsed) Apr 22 23:22:59.788: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (46.043404138s elapsed) Apr 22 23:23:01.790: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (48.045269388s elapsed) Apr 22 23:23:03.791: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (50.046234458s elapsed) Apr 22 23:23:05.792: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (52.047207652s elapsed) Apr 22 23:23:07.793: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (54.048446144s elapsed) Apr 22 23:23:09.794: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (56.049399975s elapsed) Apr 22 23:23:11.796: INFO: pod container-probe-4688/busybox-33a610a3-f7ab-471b-83df-3db7781edd19 is not ready (58.050820508s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:13.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4688" for this suite. 
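The spec above (tagged MinimumKubeletVersion:1.20) relies on the ExecProbeTimeout behavior introduced in 1.20: an exec readiness probe that runs longer than timeoutSeconds now counts as a failure instead of hanging, so the pod never becomes Ready, which is exactly the "is not ready (... elapsed)" loop in the log. A sketch of such a probe; the sleep duration is illustrative, and the ProbeHandler field name assumes a recent k8s.io/api module (older releases call it Handler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-exec-timeout"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "sleep 600"},
				ReadinessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						// Sleeps far longer than the timeout, so every probe
						// attempt is cut off and recorded as a failure.
						Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 600"}},
					},
					InitialDelaySeconds: 5,
					TimeoutSeconds:      1,
					PeriodSeconds:       10,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}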
• [SLOW TEST:68.104 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":2,"skipped":148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:04.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-3baf5fbf-8e0d-450d-ad31-0b6a374ea6a1 in namespace container-probe-6843 Apr 22 23:22:12.376: INFO: Started pod busybox-3baf5fbf-8e0d-450d-ad31-0b6a374ea6a1 in namespace container-probe-6843 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 23:22:12.378: INFO: Initial restart count of pod busybox-3baf5fbf-8e0d-450d-ad31-0b6a374ea6a1 is 0 Apr 22 23:23:16.528: INFO: Restart count of pod container-probe-6843/busybox-3baf5fbf-8e0d-450d-ad31-0b6a374ea6a1 is now 1 (1m4.150516049s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:16.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6843" for this suite. 
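The spec above is the liveness-side twin of the readiness-timeout sketch earlier: the same over-long exec probe, but attached as a liveness probe, so each timed-out attempt counts against failureThreshold and the kubelet restarts the container (restart count 1 after about a minute in the log). A sketch of the probe shape only, with illustrative durations and the same ProbeHandler naming assumption as before:

LivenessProbe: &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		// Exceeds TimeoutSeconds on every attempt; with FailureThreshold 1
		// a single timed-out probe is enough to trigger a restart.
		Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 600"}},
	},
	InitialDelaySeconds: 5,
	TimeoutSeconds:      1,
	PeriodSeconds:       10,
	FailureThreshold:    1,
},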
• [SLOW TEST:72.207 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":2,"skipped":443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:26.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-776c8aef-26a9-45bd-ace6-9096c44566fb in namespace kubelet-960 I0422 23:22:27.060512 28 runners.go:190] Created replication controller with name: cleanup20-776c8aef-26a9-45bd-ace6-9096c44566fb, namespace: kubelet-960, replica count: 20 I0422 23:22:37.111845 28 runners.go:190] cleanup20-776c8aef-26a9-45bd-ace6-9096c44566fb Pods: 20 out of 20 created, 2 running, 18 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 23:22:47.112000 28 runners.go:190] cleanup20-776c8aef-26a9-45bd-ace6-9096c44566fb Pods: 20 out of 20 created, 18 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0422 23:22:57.113489 28 runners.go:190] cleanup20-776c8aef-26a9-45bd-ace6-9096c44566fb Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 23:22:58.114: INFO: Checking pods on node node2 via /runningpods endpoint Apr 22 23:22:58.114: INFO: Checking pods on node node1 via /runningpods endpoint Apr 22 23:22:58.145: INFO:
Resource usage on node "master1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.491       5097.71                 1848.45
"runtime"   0.110       700.75                  314.70
"kubelet"   0.110       700.75                  314.70

Resource usage on node "master2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.372       3604.92                 1558.16
"runtime"   0.120       617.44                  264.29
"kubelet"   0.120       617.44                  264.29

Resource usage on node "master3":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.315       3536.00                 1530.69
"runtime"   0.104       523.20                  238.65
"kubelet"   0.104       523.20                  238.65

Resource usage on node "node1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.598       6171.88                 2164.76
"runtime"   0.768       2563.14                 549.02
"kubelet"   0.768       2563.14                 549.02

Resource usage on node "node2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"runtime"   0.813       1620.20                 624.97
"kubelet"   0.813       1620.20                 624.97
"/"         1.523       4034.93                 1167.15

STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-776c8aef-26a9-45bd-ace6-9096c44566fb in namespace kubelet-960, will wait for the garbage collector to delete the pods Apr 22 23:22:58.202: INFO: Deleting ReplicationController cleanup20-776c8aef-26a9-45bd-ace6-9096c44566fb took: 4.819805ms Apr 22 23:22:58.803: INFO: Terminating ReplicationController cleanup20-776c8aef-26a9-45bd-ace6-9096c44566fb pods took: 600.877667ms Apr 22 23:23:19.005: INFO: Checking pods on node node2 via /runningpods endpoint Apr 22 23:23:19.005: INFO: Checking pods on node node1 via /runningpods endpoint Apr 22 23:23:19.209: INFO: Deleting 20 pods on 2 nodes completed in 1.204333917s after the RC was deleted Apr 22 23:23:19.209: INFO:
CPU usage of containers on node "master3":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.315  0.317  0.337  0.358  0.358  0.358
"runtime"   0.000  0.000  0.104  0.104  0.104  0.104  0.104
"kubelet"   0.000  0.000  0.104  0.104  0.104  0.104  0.104

CPU usage of containers on node "node1":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.598  1.175  1.333  1.988  1.988  1.988
"runtime"   0.000  0.000  0.404  0.768  0.768  0.768  0.768
"kubelet"   0.000  0.000  0.404  0.768  0.768  0.768  0.768

CPU usage of containers on node "node2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  1.363  1.422  1.523  1.803  1.803  1.803
"runtime"   0.000  0.000  0.813  0.944  0.944  0.944  0.944
"kubelet"   0.000  0.000  0.813  0.944  0.944  0.944  0.944

CPU usage of containers on node "master1":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.442  0.478  0.491  0.512  0.512  0.512
"runtime"   0.000  0.000  0.115  0.115  0.124  0.124  0.124
"kubelet"   0.000  0.000  0.115  0.115  0.124  0.124  0.124

CPU usage of containers on node "master2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.301  0.331  0.372  0.374  0.374  0.374
"runtime"   0.000  0.000  0.103  0.103  0.120  0.120  0.120
"kubelet"   0.000  0.000  0.103  0.103  0.120  0.120  0.120

[AfterEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node node1 STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node node2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:19.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-960" for this suite. • [SLOW TEST:52.261 seconds] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279 kubelet should be able to delete 10 pods per node in 1m0s.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":6,"skipped":586,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:20.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-33a78bac-2d00-445c-9cb4-0121b60d71ca in namespace container-probe-4789 Apr 22 23:22:24.076: INFO: Started pod startup-33a78bac-2d00-445c-9cb4-0121b60d71ca in namespace container-probe-4789 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 23:22:24.078: INFO: Initial restart count of pod startup-33a78bac-2d00-445c-9cb4-0121b60d71ca is 0 Apr 22 23:23:22.195: INFO: Restart count of pod container-probe-4789/startup-33a78bac-2d00-445c-9cb4-0121b60d71ca is now 1 (58.11687932s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:22.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4789" for this suite. 
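------------------------------
The spec above relies on startup-probe gating: the kubelet runs no liveness or readiness checks on a container until its startupProbe has succeeded once. A minimal sketch of that interplay, assuming illustrative commands and timings rather than the ones container_probe.go actually uses:

apiVersion: v1
kind: Pod
metadata:
  name: startup-gates-liveness       # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/started; sleep 600"]
    startupProbe:
      exec:
        command: ["cat", "/tmp/started"]   # succeeds once the file exists
      periodSeconds: 10
      failureThreshold: 30           # generous startup budget (~300s)
    livenessProbe:
      exec:
        command: ["/bin/false"]      # always fails, but only runs after startup passes
      periodSeconds: 10
      failureThreshold: 3

Only after the startup probe passes does the failing liveness probe begin counting toward failureThreshold, which is why the restart above lands at ~58s rather than immediately.
------------------------------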
• [SLOW TEST:62.173 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":5,"skipped":593,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:16.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:22.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2376" for this suite. 
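------------------------------
The runAsNonRoot check exercised above is enforced by the kubelet at container-create time: with runAsNonRoot: true it validates the effective UID, and an explicit runAsUser: 0 fails that validation, so the container is never started and the pod surfaces a create error instead of running. A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: explicit-root-uid            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: explicit-root
    image: busybox:1.29
    command: ["/bin/true"]
    securityContext:
      runAsNonRoot: true
      runAsUser: 0                   # contradicts runAsNonRoot; kubelet refuses to create the container
------------------------------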
• [SLOW TEST:6.051 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":3,"skipped":486,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:06.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Apr 22 23:23:06.764: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:23:08.769: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:23:10.770: INFO: The status of Pod master is Running (Ready = true) Apr 22 23:23:10.785: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:23:12.788: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:23:14.788: INFO: The status of Pod slave is Running (Ready = true) Apr 22 23:23:14.804: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:23:16.812: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:23:18.808: INFO: The status of Pod private is Running (Ready = true) Apr 22 23:23:18.836: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:23:20.839: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:23:22.840: INFO: The status of Pod default is Running (Ready = true) Apr 22 23:23:22.844: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2948 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:22.845: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:23.017: INFO: Exec stderr: "" Apr 22 23:23:23.019: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2948 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:23.019: INFO: >>> kubeConfig: 
/root/.kube/config Apr 22 23:23:23.109: INFO: Exec stderr: "" Apr 22 23:23:23.112: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2948 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:23.112: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:23.198: INFO: Exec stderr: "" Apr 22 23:23:23.200: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2948 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:23.200: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:23.294: INFO: Exec stderr: "" Apr 22 23:23:23.297: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2948 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:23.297: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:23.397: INFO: Exec stderr: "" Apr 22 23:23:23.399: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2948 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:23.399: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:23.475: INFO: Exec stderr: "" Apr 22 23:23:23.478: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2948 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:23.478: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:23.565: INFO: Exec stderr: "" Apr 22 23:23:23.568: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2948 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:23.569: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:23.674: INFO: Exec stderr: "" Apr 22 23:23:23.676: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2948 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:23.676: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:23.754: INFO: Exec stderr: "" Apr 22 23:23:23.757: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2948 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:23.757: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:23.840: INFO: Exec stderr: "" Apr 22 23:23:23.842: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2948 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:23.842: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:23.923: INFO: Exec stderr: "" Apr 22 23:23:23.925: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2948 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:23.925: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:24.015: INFO: Exec stderr: "" Apr 22 23:23:24.017: INFO: 
ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2948 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:24.017: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:24.099: INFO: Exec stderr: "" Apr 22 23:23:24.102: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2948 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:24.102: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:24.187: INFO: Exec stderr: "" Apr 22 23:23:24.190: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2948 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:24.190: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:24.304: INFO: Exec stderr: "" Apr 22 23:23:24.306: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2948 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:24.306: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:24.405: INFO: Exec stderr: "" Apr 22 23:23:24.407: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-2948 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:24.408: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:24.559: INFO: Exec stderr: "" Apr 22 23:23:24.561: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-2948 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:24.561: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:24.714: INFO: Exec stderr: "" Apr 22 23:23:24.717: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-2948 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:24.717: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:24.811: INFO: Exec stderr: "" Apr 22 23:23:24.814: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-2948 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:24.814: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:24.953: INFO: Exec stderr: "" Apr 22 23:23:28.971: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-2948"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-2948"/host; echo host > "/var/lib/kubelet/mount-propagation-2948"/host/file] Namespace:mount-propagation-2948 PodName:hostexec-node1-8tr77 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 22 23:23:28.971: INFO: >>> 
kubeConfig: /root/.kube/config Apr 22 23:23:29.138: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2948 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:29.138: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:29.310: INFO: pod master mount master: stdout: "master", stderr: "" error: Apr 22 23:23:29.312: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2948 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:29.312: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:29.403: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:29.406: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2948 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:29.406: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:29.585: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:29.588: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2948 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:29.588: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:29.724: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:29.727: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2948 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:29.727: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:29.817: INFO: pod master mount host: stdout: "host", stderr: "" error: Apr 22 23:23:29.820: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2948 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:29.820: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:30.299: INFO: pod slave mount master: stdout: "master", stderr: "" error: Apr 22 23:23:30.302: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2948 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:30.302: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:30.389: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Apr 22 23:23:30.392: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2948 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:30.392: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:30.486: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:30.489: 
INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2948 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:30.489: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:30.585: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:30.588: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2948 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:30.588: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:30.681: INFO: pod slave mount host: stdout: "host", stderr: "" error: Apr 22 23:23:30.684: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2948 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:30.684: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:30.796: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:30.799: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2948 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:30.799: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:30.905: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:30.908: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2948 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:30.908: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:30.996: INFO: pod private mount private: stdout: "private", stderr: "" error: Apr 22 23:23:30.999: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2948 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:30.999: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:31.083: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:31.086: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2948 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:31.086: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:31.184: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:31.188: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2948 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:31.188: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:31.278: INFO: pod default mount master: 
stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:31.281: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2948 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:31.281: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:31.385: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:31.387: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2948 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:31.387: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:31.473: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:31.476: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2948 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:31.476: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:31.562: INFO: pod default mount default: stdout: "default", stderr: "" error: Apr 22 23:23:31.565: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2948 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:31.565: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:31.657: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Apr 22 23:23:31.657: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-2948"/master/file` = master] Namespace:mount-propagation-2948 PodName:hostexec-node1-8tr77 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 22 23:23:31.657: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:31.747: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-2948"/slave/file] Namespace:mount-propagation-2948 PodName:hostexec-node1-8tr77 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 22 23:23:31.747: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:31.840: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-2948"/host] Namespace:mount-propagation-2948 PodName:hostexec-node1-8tr77 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 22 23:23:31.840: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:31.953: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-2948 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:31.953: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:32.070: INFO: Exec stderr: "" Apr 22 23:23:32.072: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-2948 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:32.072: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:32.193: INFO: Exec stderr: "" Apr 22 23:23:32.196: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-2948 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:32.196: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:32.297: INFO: Exec stderr: "" Apr 22 23:23:32.299: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-2948 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 22 23:23:32.299: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:23:32.402: INFO: Exec stderr: "" Apr 22 23:23:32.402: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-2948"] Namespace:mount-propagation-2948 PodName:hostexec-node1-8tr77 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 22 23:23:32.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node1-8tr77 in namespace mount-propagation-2948 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:32.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-2948" for this suite. 
• [SLOW TEST:25.784 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":7,"skipped":921,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:22.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Apr 22 23:23:22.964: INFO: Waiting up to 5m0s for pod "security-context-35280f3b-84e9-4487-819a-2da81d1d4252" in namespace "security-context-8929" to be "Succeeded or Failed" Apr 22 23:23:22.966: INFO: Pod "security-context-35280f3b-84e9-4487-819a-2da81d1d4252": Phase="Pending", Reason="", readiness=false. Elapsed: 1.982277ms Apr 22 23:23:24.971: INFO: Pod "security-context-35280f3b-84e9-4487-819a-2da81d1d4252": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00652389s Apr 22 23:23:26.975: INFO: Pod "security-context-35280f3b-84e9-4487-819a-2da81d1d4252": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010700314s Apr 22 23:23:28.978: INFO: Pod "security-context-35280f3b-84e9-4487-819a-2da81d1d4252": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013439242s Apr 22 23:23:30.981: INFO: Pod "security-context-35280f3b-84e9-4487-819a-2da81d1d4252": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016515885s Apr 22 23:23:32.985: INFO: Pod "security-context-35280f3b-84e9-4487-819a-2da81d1d4252": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020599592s Apr 22 23:23:34.989: INFO: Pod "security-context-35280f3b-84e9-4487-819a-2da81d1d4252": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.025216409s STEP: Saw pod success Apr 22 23:23:34.990: INFO: Pod "security-context-35280f3b-84e9-4487-819a-2da81d1d4252" satisfied condition "Succeeded or Failed" Apr 22 23:23:34.993: INFO: Trying to get logs from node node2 pod security-context-35280f3b-84e9-4487-819a-2da81d1d4252 container test-container: STEP: delete the pod Apr 22 23:23:35.036: INFO: Waiting for pod security-context-35280f3b-84e9-4487-819a-2da81d1d4252 to disappear Apr 22 23:23:35.039: INFO: Pod security-context-35280f3b-84e9-4487-819a-2da81d1d4252 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:35.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8929" for this suite. 
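------------------------------
Note the "Creating a pod to test seccomp.security.alpha.kubernetes.io/pod" step above: this suite still drives seccomp through the legacy annotation. Since v1.19 the supported spelling is the securityContext.seccompProfile field. A sketch of the field form, with illustrative names; the verification command is an assumption, not the test's:

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-runtime-default      # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    seccompProfile:
      type: RuntimeDefault           # field equivalent of the runtime/default annotation value
  containers:
  - name: test-container
    image: busybox:1.29
    # "Seccomp: 2" in the status file indicates a filter profile is applied
    command: ["/bin/sh", "-c", "grep Seccomp /proc/1/status"]
------------------------------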
• [SLOW TEST:12.117 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":4,"skipped":617,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:23.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. STEP: verifying the node has the label foo-2129086f-12fc-40df-aaaa-39eaed0fd79f bar STEP: verifying the node has the label fizz-fd8a7671-5fa3-4c8d-9143-5770f4512a81 buzz STEP: Trying to create runtimeclass and pod STEP: removing the label fizz-fd8a7671-5fa3-4c8d-9143-5770f4512a81 off the node node2 STEP: verifying the node doesn't have the label fizz-fd8a7671-5fa3-4c8d-9143-5770f4512a81 STEP: removing the label foo-2129086f-12fc-40df-aaaa-39eaed0fd79f off the node node2 STEP: verifying the node doesn't have the label foo-2129086f-12fc-40df-aaaa-39eaed0fd79f [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:45.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-9905" for this suite. 
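------------------------------
The label dance above (foo-...=bar and fizz-...=buzz applied to node2, then removed) exercises RuntimeClass scheduling: a RuntimeClass can carry a scheduling.nodeSelector that admission merges into the nodeSelector of every pod referencing the class. A sketch with the labels taken from the log; the class name is illustrative and the handler is an assumption, since it must match a handler configured in the node's CRI runtime:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sched-runtimeclass           # illustrative name
handler: runc                        # assumption; must exist in the runtime's configuration
scheduling:
  nodeSelector:
    foo-2129086f-12fc-40df-aaaa-39eaed0fd79f: bar
    fizz-fd8a7671-5fa3-4c8d-9143-5770f4512a81: buzz
---
apiVersion: v1
kind: Pod
metadata:
  name: rc-pod                       # illustrative name
spec:
  runtimeClassName: sched-runtimeclass   # pulls in the nodeSelector above
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1
------------------------------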
• [SLOW TEST:22.118 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":6,"skipped":1082,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:19.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-0686ca36-e1ef-4668-89b0-6671157c02f5 in namespace container-probe-984 Apr 22 23:23:29.735: INFO: Started pod liveness-0686ca36-e1ef-4668-89b0-6671157c02f5 in namespace container-probe-984 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 23:23:29.738: INFO: Initial restart count of pod liveness-0686ca36-e1ef-4668-89b0-6671157c02f5 is 0 Apr 22 23:23:47.780: INFO: Restart count of pod container-probe-984/liveness-0686ca36-e1ef-4668-89b0-6671157c02f5 is now 1 (18.042786639s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:47.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-984" for this suite. 
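------------------------------
On the redirect spec above: the kubelet's httpGet probes follow redirects to the same host, so a probe pointed at a redirecting path is judged by the final target; a redirect to a different host is instead treated as a success with a ProbeWarning event. A sketch assuming the agnhost liveness server and its /redirect helper as the e2e source uses them; the exact path and port here are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-local-redirect      # illustrative name
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["liveness"]               # serves /healthz, which starts failing after a while
    livenessProbe:
      httpGet:
        path: /redirect?loc=%2Fhealthz   # redirects to /healthz on the same host; kubelet follows it
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
------------------------------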
• [SLOW TEST:28.096 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":7,"skipped":818,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:35.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:48.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5313" for this suite. • [SLOW TEST:13.103 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":5,"skipped":625,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:45.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Apr 22 23:23:45.335: INFO: Waiting up to 5m0s for pod "busybox-user-0-2bc41cfc-53b2-4d95-8e17-2b5c33329d44" in namespace "security-context-test-1383" to 
be "Succeeded or Failed" Apr 22 23:23:45.337: INFO: Pod "busybox-user-0-2bc41cfc-53b2-4d95-8e17-2b5c33329d44": Phase="Pending", Reason="", readiness=false. Elapsed: 1.902324ms Apr 22 23:23:47.342: INFO: Pod "busybox-user-0-2bc41cfc-53b2-4d95-8e17-2b5c33329d44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006386303s Apr 22 23:23:49.345: INFO: Pod "busybox-user-0-2bc41cfc-53b2-4d95-8e17-2b5c33329d44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009718568s Apr 22 23:23:51.351: INFO: Pod "busybox-user-0-2bc41cfc-53b2-4d95-8e17-2b5c33329d44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015953868s Apr 22 23:23:51.351: INFO: Pod "busybox-user-0-2bc41cfc-53b2-4d95-8e17-2b5c33329d44" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:51.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1383" for this suite. • [SLOW TEST:6.060 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":7,"skipped":1097,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:51.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Apr 22 23:23:51.457: INFO: Waiting up to 5m0s for pod "security-context-3769544f-c9e9-48ae-8e47-d9e4996d2124" in namespace "security-context-4765" to be "Succeeded or Failed" Apr 22 23:23:51.460: INFO: Pod "security-context-3769544f-c9e9-48ae-8e47-d9e4996d2124": Phase="Pending", Reason="", readiness=false. Elapsed: 2.372383ms Apr 22 23:23:53.464: INFO: Pod "security-context-3769544f-c9e9-48ae-8e47-d9e4996d2124": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006503277s Apr 22 23:23:55.468: INFO: Pod "security-context-3769544f-c9e9-48ae-8e47-d9e4996d2124": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010290478s STEP: Saw pod success Apr 22 23:23:55.468: INFO: Pod "security-context-3769544f-c9e9-48ae-8e47-d9e4996d2124" satisfied condition "Succeeded or Failed" Apr 22 23:23:55.470: INFO: Trying to get logs from node node1 pod security-context-3769544f-c9e9-48ae-8e47-d9e4996d2124 container test-container: STEP: delete the pod Apr 22 23:23:55.484: INFO: Waiting for pod security-context-3769544f-c9e9-48ae-8e47-d9e4996d2124 to disappear Apr 22 23:23:55.486: INFO: Pod security-context-3769544f-c9e9-48ae-8e47-d9e4996d2124 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:55.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4765" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":8,"skipped":1126,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:48.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Apr 22 23:23:48.385: INFO: Waiting up to 5m0s for pod "security-context-b41bb3a1-488d-4538-bf3e-e47c14669140" in namespace "security-context-113" to be "Succeeded or Failed" Apr 22 23:23:48.387: INFO: Pod "security-context-b41bb3a1-488d-4538-bf3e-e47c14669140": Phase="Pending", Reason="", readiness=false. Elapsed: 1.981663ms Apr 22 23:23:50.390: INFO: Pod "security-context-b41bb3a1-488d-4538-bf3e-e47c14669140": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004970753s Apr 22 23:23:52.394: INFO: Pod "security-context-b41bb3a1-488d-4538-bf3e-e47c14669140": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008407525s Apr 22 23:23:54.397: INFO: Pod "security-context-b41bb3a1-488d-4538-bf3e-e47c14669140": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012232575s Apr 22 23:23:56.402: INFO: Pod "security-context-b41bb3a1-488d-4538-bf3e-e47c14669140": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.016352171s STEP: Saw pod success Apr 22 23:23:56.402: INFO: Pod "security-context-b41bb3a1-488d-4538-bf3e-e47c14669140" satisfied condition "Succeeded or Failed" Apr 22 23:23:56.403: INFO: Trying to get logs from node node2 pod security-context-b41bb3a1-488d-4538-bf3e-e47c14669140 container test-container: STEP: delete the pod Apr 22 23:23:56.414: INFO: Waiting for pod security-context-b41bb3a1-488d-4538-bf3e-e47c14669140 to disappear Apr 22 23:23:56.416: INFO: Pod security-context-b41bb3a1-488d-4538-bf3e-e47c14669140 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:23:56.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-113" for this suite. • [SLOW TEST:8.067 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":6,"skipped":726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:56.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Apr 22 23:23:56.730: INFO: Waiting up to 5m0s for pod "security-context-3653ddc0-4070-4815-bf9c-4424b017c9eb" in namespace "security-context-8949" to be "Succeeded or Failed" Apr 22 23:23:56.732: INFO: Pod "security-context-3653ddc0-4070-4815-bf9c-4424b017c9eb": Phase="Pending", Reason="", readiness=false. Elapsed: 1.932058ms Apr 22 23:23:58.736: INFO: Pod "security-context-3653ddc0-4070-4815-bf9c-4424b017c9eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00557175s Apr 22 23:24:00.739: INFO: Pod "security-context-3653ddc0-4070-4815-bf9c-4424b017c9eb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008403836s STEP: Saw pod success Apr 22 23:24:00.739: INFO: Pod "security-context-3653ddc0-4070-4815-bf9c-4424b017c9eb" satisfied condition "Succeeded or Failed" Apr 22 23:24:00.741: INFO: Trying to get logs from node node1 pod security-context-3653ddc0-4070-4815-bf9c-4424b017c9eb container test-container: STEP: delete the pod Apr 22 23:24:00.755: INFO: Waiting for pod security-context-3653ddc0-4070-4815-bf9c-4424b017c9eb to disappear Apr 22 23:24:00.757: INFO: Pod security-context-3653ddc0-4070-4815-bf9c-4424b017c9eb no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:24:00.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8949" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":7,"skipped":871,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:48.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-9c498073-2ecb-4ea1-b6e2-7af1d552b698 in namespace container-probe-1932 Apr 22 23:22:58.556: INFO: Started pod startup-9c498073-2ecb-4ea1-b6e2-7af1d552b698 in namespace container-probe-1932 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 23:22:58.558: INFO: Initial restart count of pod startup-9c498073-2ecb-4ea1-b6e2-7af1d552b698 is 0 Apr 22 23:24:02.692: INFO: Restart count of pod container-probe-1932/startup-9c498073-2ecb-4ea1-b6e2-7af1d552b698 is now 1 (1m4.133001266s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:24:02.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1932" for this suite. 
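------------------------------
The timing above follows from the startup-probe budget: a container gets roughly initialDelaySeconds + failureThreshold x periodSeconds to pass its startupProbe, after which the kubelet kills and restarts it, so a probe that never succeeds produces a restart on that schedule. A minimal sketch with illustrative numbers:

apiVersion: v1
kind: Pod
metadata:
  name: startup-never-succeeds       # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 600"]
    startupProbe:
      exec:
        command: ["/bin/false"]      # never succeeds
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3            # restarted after ~10s + 3x10s of failed checks
------------------------------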
• [SLOW TEST:74.196 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":5,"skipped":451,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:24:02.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:24:02.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-5924" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":6,"skipped":526,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:24:00.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Apr 22 23:24:00.880: INFO: Waiting up to 5m0s for pod "security-context-44659da5-5ca7-40a0-aa09-8e9d2b13d889" in namespace "security-context-8546" to be "Succeeded or Failed" Apr 22 23:24:00.883: INFO: Pod "security-context-44659da5-5ca7-40a0-aa09-8e9d2b13d889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.534213ms Apr 22 23:24:02.888: INFO: Pod "security-context-44659da5-5ca7-40a0-aa09-8e9d2b13d889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007630406s Apr 22 23:24:04.896: INFO: Pod "security-context-44659da5-5ca7-40a0-aa09-8e9d2b13d889": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016414486s STEP: Saw pod success Apr 22 23:24:04.897: INFO: Pod "security-context-44659da5-5ca7-40a0-aa09-8e9d2b13d889" satisfied condition "Succeeded or Failed" Apr 22 23:24:04.901: INFO: Trying to get logs from node node1 pod security-context-44659da5-5ca7-40a0-aa09-8e9d2b13d889 container test-container: STEP: delete the pod Apr 22 23:24:04.914: INFO: Waiting for pod security-context-44659da5-5ca7-40a0-aa09-8e9d2b13d889 to disappear Apr 22 23:24:04.916: INFO: Pod security-context-44659da5-5ca7-40a0-aa09-8e9d2b13d889 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:24:04.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8546" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":8,"skipped":912,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:24:02.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Apr 22 23:24:02.930: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-49e99b79-73b2-4f32-9de6-a67a2c9bae32" in namespace "security-context-test-5652" to be "Succeeded or Failed" Apr 22 23:24:02.933: INFO: Pod "alpine-nnp-true-49e99b79-73b2-4f32-9de6-a67a2c9bae32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165856ms Apr 22 23:24:04.935: INFO: Pod "alpine-nnp-true-49e99b79-73b2-4f32-9de6-a67a2c9bae32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004507381s Apr 22 23:24:06.941: INFO: Pod "alpine-nnp-true-49e99b79-73b2-4f32-9de6-a67a2c9bae32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010793324s Apr 22 23:24:06.941: INFO: Pod "alpine-nnp-true-49e99b79-73b2-4f32-9de6-a67a2c9bae32" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:24:06.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5652" for this suite. 
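------------------------------
allowPrivilegeEscalation, exercised by the alpine-nnp-true pod above, controls the no_new_privs flag on the container's first process: with true the flag stays unset, so a setuid binary run by a non-root user can still gain uid 0, which is what the spec verifies. A sketch with illustrative values; the real test image ships a setuid helper for the check:

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-true-sketch       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: alpine
    image: alpine:3.12
    command: ["/bin/sh", "-c", "id -u"]
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: true # leaves no_new_privs unset; setuid binaries may elevate
------------------------------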
• ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":7,"skipped":529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:47.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 22 23:23:56.948: INFO: start=2022-04-22 23:23:51.927282451 +0000 UTC m=+118.478541604, now=2022-04-22 23:23:56.948483722 +0000 UTC m=+123.499742916, kubelet pod: {"metadata":{"name":"pod-submit-remove-163f0d9b-3919-42d2-b74f-e8cd853b9d2d","namespace":"pods-4832","uid":"08bb1e16-5512-4874-b9d6-f2f1a69ade2d","resourceVersion":"78355","creationTimestamp":"2022-04-22T23:23:47Z","deletionTimestamp":"2022-04-22T23:24:21Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"893165973"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.159\"\n ],\n \"mac\": \"72:a3:84:34:a9:ba\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.159\"\n ],\n \"mac\": \"72:a3:84:34:a9:ba\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2022-04-22T23:23:47.912283141Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-04-22T23:23:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-qbjnn","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-qbjnn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:47Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:54Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:54Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:47Z"}],"hostIP":"10.10.190.207","podIP":"10.244.3.159","podIPs":[{"ip":"10.244.3.159"}],"startTime":"2022-04-22T23:23:47Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2022-04-22T23:23:50Z","finishedAt":"2022-04-22T23:23:52Z","containerID":"docker://dab3592fe7877b9655afb3c5f3d2a6ab4772b5e24bcab3fe19e2c92075f6a9da"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://dab3592fe7877b9655afb3c5f3d2a6ab4772b5e24bcab3fe19e2c92075f6a9da","started":false}],"qosClass":"BestEffort"}} Apr 22 23:24:01.945: INFO: start=2022-04-22 23:23:51.927282451 +0000 UTC m=+118.478541604, now=2022-04-22 23:24:01.94502931 +0000 UTC m=+128.496288464, kubelet 
pod: {"metadata":{"name":"pod-submit-remove-163f0d9b-3919-42d2-b74f-e8cd853b9d2d","namespace":"pods-4832","uid":"08bb1e16-5512-4874-b9d6-f2f1a69ade2d","resourceVersion":"78355","creationTimestamp":"2022-04-22T23:23:47Z","deletionTimestamp":"2022-04-22T23:24:21Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"893165973"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.159\"\n ],\n \"mac\": \"72:a3:84:34:a9:ba\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.159\"\n ],\n \"mac\": \"72:a3:84:34:a9:ba\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2022-04-22T23:23:47.912283141Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-04-22T23:23:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-qbjnn","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-qbjnn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:47Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:54Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:54Z","reason":"ContainersNotReady","message":"containers with unready status: 
[agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:47Z"}],"hostIP":"10.10.190.207","podIP":"10.244.3.159","podIPs":[{"ip":"10.244.3.159"}],"startTime":"2022-04-22T23:23:47Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2022-04-22T23:23:50Z","finishedAt":"2022-04-22T23:23:52Z","containerID":"docker://dab3592fe7877b9655afb3c5f3d2a6ab4772b5e24bcab3fe19e2c92075f6a9da"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://dab3592fe7877b9655afb3c5f3d2a6ab4772b5e24bcab3fe19e2c92075f6a9da","started":false}],"qosClass":"BestEffort"}} Apr 22 23:24:06.946: INFO: start=2022-04-22 23:23:51.927282451 +0000 UTC m=+118.478541604, now=2022-04-22 23:24:06.946220945 +0000 UTC m=+133.497480097, kubelet pod: {"metadata":{"name":"pod-submit-remove-163f0d9b-3919-42d2-b74f-e8cd853b9d2d","namespace":"pods-4832","uid":"08bb1e16-5512-4874-b9d6-f2f1a69ade2d","resourceVersion":"78355","creationTimestamp":"2022-04-22T23:23:47Z","deletionTimestamp":"2022-04-22T23:24:21Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"893165973"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.159\"\n ],\n \"mac\": \"72:a3:84:34:a9:ba\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.159\"\n ],\n \"mac\": \"72:a3:84:34:a9:ba\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2022-04-22T23:23:47.912283141Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-04-22T23:23:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-qbjnn","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-qbjnn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:47Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:54Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:54Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-22T23:23:47Z"}],"hostIP":"10.10.190.207","podIP":"10.244.3.159","podIPs":[{"ip":"10.244.3.159"}],"startTime":"2022-04-22T23:23:47Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2022-04-22T23:23:50Z","finishedAt":"2022-04-22T23:23:52Z","containerID":"docker://dab3592fe7877b9655afb3c5f3d2a6ab4772b5e24bcab3fe19e2c92075f6a9da"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://dab3592fe7877b9655afb3c5f3d2a6ab4772b5e24bcab3fe19e2c92075f6a9da","started":false}],"qosClass":"BestEffort"}} Apr 22 23:24:11.943: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] 
Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:24:11.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4832" for this suite. • [SLOW TEST:24.084 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":8,"skipped":854,"failed":0} SSSSSSSSSS ------------------------------ Apr 22 23:24:11.977: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:24:07.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Apr 22 23:24:07.120: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Apr 22 23:24:07.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5094 create -f -' Apr 22 23:24:07.585: INFO: stderr: "" Apr 22 23:24:07.585: INFO: stdout: "secret/test-secret created\n" Apr 22 23:24:07.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5094 create -f -' Apr 22 23:24:07.919: INFO: stderr: "" Apr 22 23:24:07.919: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Apr 22 23:24:13.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5094 logs secret-test-pod test-container' Apr 22 23:24:14.093: INFO: stderr: "" Apr 22 23:24:14.093: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:24:14.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-5094" for this suite. 
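
[Note on the Secret example spec above: the object names test-secret and secret-test-pod, the container name test-container, and the data-1 key all appear in the log output; the image and command below are stand-ins for the suite's own test image, so treat this as a functionally equivalent sketch rather than the exact manifest the test ships.]

    apiVersion: v1
    kind: Secret
    metadata:
      name: test-secret
    data:
      data-1: dmFsdWUtMQ==             # base64 of "value-1", the value echoed in the log
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-test-pod
    spec:
      restartPolicy: Never
      volumes:
      - name: secret-volume
        secret:
          secretName: test-secret
      containers:
      - name: test-container
        image: busybox:1.35            # illustrative image
        command: ["cat", "/etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
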
• [SLOW TEST:7.011 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":8,"skipped":589,"failed":0} Apr 22 23:24:14.104: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:04.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Apr 22 23:22:09.252: INFO: watch delete seen for pod-submit-status-0-0 Apr 22 23:22:09.253: INFO: Pod pod-submit-status-0-0 on node node1 timings total=4.823649969s t=689ms run=0s execute=0s Apr 22 23:22:12.251: INFO: watch delete seen for pod-submit-status-2-0 Apr 22 23:22:12.251: INFO: Pod pod-submit-status-2-0 on node node1 timings total=7.822032797s t=266ms run=0s execute=0s Apr 22 23:22:17.802: INFO: watch delete seen for pod-submit-status-1-0 Apr 22 23:22:17.802: INFO: Pod pod-submit-status-1-0 on node node1 timings total=13.372872486s t=177ms run=0s execute=0s Apr 22 23:22:30.508: INFO: watch delete seen for pod-submit-status-1-1 Apr 22 23:22:30.508: INFO: Pod pod-submit-status-1-1 on node node1 timings total=12.705908648s t=1.101s run=0s execute=0s Apr 22 23:22:31.472: INFO: watch delete seen for pod-submit-status-2-1 Apr 22 23:22:31.472: INFO: Pod pod-submit-status-2-1 on node node1 timings total=19.221047305s t=1.407s run=0s execute=0s Apr 22 23:22:33.922: INFO: watch delete seen for pod-submit-status-0-1 Apr 22 23:22:33.922: INFO: Pod pod-submit-status-0-1 on node node2 timings total=24.669827896s t=966ms run=0s execute=0s Apr 22 23:22:36.441: INFO: watch delete seen for pod-submit-status-2-2 Apr 22 23:22:36.441: INFO: Pod pod-submit-status-2-2 on node node1 timings total=4.968747723s t=1.384s run=0s execute=0s Apr 22 23:22:36.466: INFO: watch delete seen for pod-submit-status-1-2 Apr 22 23:22:36.466: INFO: Pod pod-submit-status-1-2 on node node1 timings total=5.958545561s t=1.571s run=0s execute=0s Apr 22 23:22:36.867: INFO: watch delete seen for pod-submit-status-0-2 Apr 22 23:22:36.867: INFO: Pod pod-submit-status-0-2 on node node1 timings total=2.944527665s t=862ms run=0s execute=0s Apr 22 23:22:42.868: INFO: watch delete seen for pod-submit-status-2-3 Apr 22 23:22:42.868: INFO: Pod pod-submit-status-2-3 on node node1 timings total=6.42733788s t=1.174s run=0s execute=0s Apr 22 23:22:43.328: INFO: watch delete seen for pod-submit-status-0-3 Apr 22 23:22:43.328: INFO: Pod pod-submit-status-0-3 on node node2 timings total=6.461121326s t=1.439s run=0s execute=0s Apr 22 
23:22:44.466: INFO: watch delete seen for pod-submit-status-1-3 Apr 22 23:22:44.466: INFO: Pod pod-submit-status-1-3 on node node1 timings total=7.999970829s t=1.44s run=3s execute=0s Apr 22 23:22:49.067: INFO: watch delete seen for pod-submit-status-2-4 Apr 22 23:22:49.067: INFO: Pod pod-submit-status-2-4 on node node1 timings total=6.198778989s t=624ms run=0s execute=0s Apr 22 23:22:50.098: INFO: watch delete seen for pod-submit-status-1-4 Apr 22 23:22:50.098: INFO: Pod pod-submit-status-1-4 on node node1 timings total=5.631539307s t=1.683s run=2s execute=0s Apr 22 23:22:50.121: INFO: watch delete seen for pod-submit-status-0-4 Apr 22 23:22:50.121: INFO: Pod pod-submit-status-0-4 on node node2 timings total=6.792906414s t=1.895s run=0s execute=0s Apr 22 23:22:53.727: INFO: watch delete seen for pod-submit-status-0-5 Apr 22 23:22:53.727: INFO: Pod pod-submit-status-0-5 on node node2 timings total=3.605981476s t=149ms run=0s execute=0s Apr 22 23:22:55.720: INFO: watch delete seen for pod-submit-status-2-5 Apr 22 23:22:55.720: INFO: Pod pod-submit-status-2-5 on node node2 timings total=6.652781288s t=311ms run=0s execute=0s Apr 22 23:22:57.474: INFO: watch delete seen for pod-submit-status-1-5 Apr 22 23:22:57.474: INFO: Pod pod-submit-status-1-5 on node node2 timings total=7.375837908s t=1.962s run=0s execute=0s Apr 22 23:23:02.323: INFO: watch delete seen for pod-submit-status-1-6 Apr 22 23:23:02.323: INFO: Pod pod-submit-status-1-6 on node node2 timings total=4.849210776s t=1.458s run=0s execute=0s Apr 22 23:23:05.008: INFO: watch delete seen for pod-submit-status-0-6 Apr 22 23:23:05.008: INFO: Pod pod-submit-status-0-6 on node node2 timings total=11.281241636s t=1.963s run=0s execute=0s Apr 22 23:23:07.921: INFO: watch delete seen for pod-submit-status-1-7 Apr 22 23:23:07.921: INFO: Pod pod-submit-status-1-7 on node node2 timings total=5.597402444s t=480ms run=0s execute=0s Apr 22 23:23:13.723: INFO: watch delete seen for pod-submit-status-0-7 Apr 22 23:23:13.723: INFO: Pod pod-submit-status-0-7 on node node2 timings total=8.714356498s t=466ms run=0s execute=0s Apr 22 23:23:16.721: INFO: watch delete seen for pod-submit-status-1-8 Apr 22 23:23:16.721: INFO: Pod pod-submit-status-1-8 on node node2 timings total=8.800600292s t=1.074s run=0s execute=0s Apr 22 23:23:19.121: INFO: watch delete seen for pod-submit-status-2-6 Apr 22 23:23:19.121: INFO: Pod pod-submit-status-2-6 on node node2 timings total=23.400551165s t=593ms run=0s execute=0s Apr 22 23:23:23.123: INFO: watch delete seen for pod-submit-status-2-7 Apr 22 23:23:23.123: INFO: Pod pod-submit-status-2-7 on node node2 timings total=4.002224118s t=808ms run=0s execute=0s Apr 22 23:23:25.121: INFO: watch delete seen for pod-submit-status-1-9 Apr 22 23:23:25.121: INFO: Pod pod-submit-status-1-9 on node node2 timings total=8.399668454s t=1.019s run=0s execute=0s Apr 22 23:23:25.723: INFO: watch delete seen for pod-submit-status-0-8 Apr 22 23:23:25.723: INFO: Pod pod-submit-status-0-8 on node node2 timings total=11.999822187s t=61ms run=0s execute=0s Apr 22 23:23:28.323: INFO: watch delete seen for pod-submit-status-2-8 Apr 22 23:23:28.323: INFO: Pod pod-submit-status-2-8 on node node2 timings total=5.200171599s t=1.427s run=0s execute=0s Apr 22 23:23:29.923: INFO: watch delete seen for pod-submit-status-0-9 Apr 22 23:23:29.923: INFO: Pod pod-submit-status-0-9 on node node2 timings total=4.2001208s t=525ms run=0s execute=0s Apr 22 23:23:32.921: INFO: watch delete seen for pod-submit-status-0-10 Apr 22 23:23:32.921: INFO: Pod 
pod-submit-status-0-10 on node node2 timings total=2.997688764s t=1.504s run=0s execute=0s Apr 22 23:23:38.721: INFO: watch delete seen for pod-submit-status-1-10 Apr 22 23:23:38.722: INFO: Pod pod-submit-status-1-10 on node node2 timings total=13.600391273s t=676ms run=0s execute=0s Apr 22 23:23:40.759: INFO: watch delete seen for pod-submit-status-0-11 Apr 22 23:23:40.759: INFO: Pod pod-submit-status-0-11 on node node2 timings total=7.838487698s t=378ms run=0s execute=0s Apr 22 23:23:41.195: INFO: watch delete seen for pod-submit-status-2-9 Apr 22 23:23:41.195: INFO: Pod pod-submit-status-2-9 on node node2 timings total=12.871920983s t=563ms run=0s execute=0s Apr 22 23:23:46.322: INFO: watch delete seen for pod-submit-status-0-12 Apr 22 23:23:46.323: INFO: Pod pod-submit-status-0-12 on node node2 timings total=5.56320994s t=1.183s run=0s execute=0s Apr 22 23:23:46.922: INFO: watch delete seen for pod-submit-status-2-10 Apr 22 23:23:46.922: INFO: Pod pod-submit-status-2-10 on node node2 timings total=5.726757152s t=232ms run=0s execute=0s Apr 22 23:23:49.121: INFO: watch delete seen for pod-submit-status-1-11 Apr 22 23:23:49.121: INFO: Pod pod-submit-status-1-11 on node node2 timings total=10.399874925s t=16ms run=0s execute=0s Apr 22 23:23:51.721: INFO: watch delete seen for pod-submit-status-2-11 Apr 22 23:23:51.721: INFO: Pod pod-submit-status-2-11 on node node2 timings total=4.799037435s t=1.42s run=0s execute=0s Apr 22 23:23:52.921: INFO: watch delete seen for pod-submit-status-1-12 Apr 22 23:23:52.921: INFO: Pod pod-submit-status-1-12 on node node2 timings total=3.79911081s t=1.465s run=0s execute=0s Apr 22 23:23:57.913: INFO: watch delete seen for pod-submit-status-0-13 Apr 22 23:23:57.913: INFO: Pod pod-submit-status-0-13 on node node1 timings total=11.590071232s t=1.165s run=0s execute=0s Apr 22 23:24:00.225: INFO: watch delete seen for pod-submit-status-0-14 Apr 22 23:24:00.225: INFO: Pod pod-submit-status-0-14 on node node1 timings total=2.311842904s t=575ms run=0s execute=0s Apr 22 23:24:07.831: INFO: watch delete seen for pod-submit-status-1-13 Apr 22 23:24:07.831: INFO: Pod pod-submit-status-1-13 on node node1 timings total=14.909953132s t=692ms run=0s execute=0s Apr 22 23:24:07.918: INFO: watch delete seen for pod-submit-status-2-12 Apr 22 23:24:07.918: INFO: Pod pod-submit-status-2-12 on node node2 timings total=16.197282497s t=1.284s run=0s execute=0s Apr 22 23:24:11.025: INFO: watch delete seen for pod-submit-status-2-13 Apr 22 23:24:11.025: INFO: Pod pod-submit-status-2-13 on node node2 timings total=3.106635635s t=223ms run=0s execute=0s Apr 22 23:24:17.819: INFO: watch delete seen for pod-submit-status-2-14 Apr 22 23:24:17.819: INFO: Pod pod-submit-status-2-14 on node node1 timings total=6.793471737s t=283ms run=0s execute=0s Apr 22 23:24:17.894: INFO: watch delete seen for pod-submit-status-1-14 Apr 22 23:24:17.894: INFO: Pod pod-submit-status-1-14 on node node2 timings total=10.063452403s t=1.435s run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:24:17.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-720" for this suite. 
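
[Note on the Pod Container Status spec above: it repeatedly creates pods whose container always exits 1, deletes each after a random delay (the "watch delete seen" lines), and asserts that the watch never reports a Succeeded phase or a succeeded container state for a pending container. A pod of the shape being exercised might look like this sketch; the run used names of the form pod-submit-status-<worker>-<n>, and the image and command here are illustrative.]

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-submit-status-demo     # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox:1.35
        command: ["false"]             # exits 1 immediately, so the pod must end Failed, never Succeeded
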
• [SLOW TEST:133.497 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:32.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Apr 22 23:23:32.546: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 Apr 22 23:23:32.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-391 create -f -' Apr 22 23:23:33.004: INFO: stderr: "" Apr 22 23:23:33.004: INFO: stdout: "pod/liveness-exec created\n" Apr 22 23:23:33.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-391 create -f -' Apr 22 23:23:33.389: INFO: stderr: "" Apr 22 23:23:33.389: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Apr 22 23:23:43.399: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:23:45.397: INFO: Pod: liveness-http, restart count:0 Apr 22 23:23:45.401: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:23:47.399: INFO: Pod: liveness-http, restart count:0 Apr 22 23:23:47.404: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:23:49.403: INFO: Pod: liveness-http, restart count:0 Apr 22 23:23:49.407: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:23:51.408: INFO: Pod: liveness-http, restart count:0 Apr 22 23:23:51.410: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:23:53.412: INFO: Pod: liveness-http, restart count:0 Apr 22 23:23:53.413: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:23:55.416: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:23:55.416: INFO: Pod: liveness-http, restart count:0 Apr 22 23:23:57.420: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:23:57.420: INFO: Pod: liveness-http, restart count:0 Apr 22 23:23:59.423: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:23:59.424: INFO: Pod: liveness-http, restart count:0 Apr 22 23:24:01.426: INFO: Pod: liveness-http, restart count:0 Apr 22 23:24:01.426: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:03.432: INFO: Pod: liveness-http, restart count:0 Apr 22 23:24:03.432: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:05.436: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:05.436: INFO: Pod: liveness-http, restart count:0 Apr 22 23:24:07.441: INFO: Pod: liveness-http, restart count:0 Apr 22 23:24:07.441: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:09.444: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:09.444: INFO: Pod: liveness-http, restart count:0 Apr 22 23:24:11.448: INFO: Pod: liveness-http, restart count:0 Apr 22 23:24:11.448: INFO: Pod: liveness-exec, restart count:0 Apr 22 
23:24:13.453: INFO: Pod: liveness-http, restart count:0 Apr 22 23:24:13.453: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:15.457: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:15.457: INFO: Pod: liveness-http, restart count:1 Apr 22 23:24:15.457: INFO: Saw liveness-http restart, succeeded... Apr 22 23:24:17.462: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:19.465: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:21.469: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:23.472: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:25.476: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:27.480: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:29.483: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:31.488: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:33.491: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:35.495: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:37.500: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:39.503: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:41.509: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:43.513: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:45.517: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:47.522: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:49.527: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:51.532: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:53.535: INFO: Pod: liveness-exec, restart count:0 Apr 22 23:24:55.538: INFO: Pod: liveness-exec, restart count:1 Apr 22 23:24:55.538: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:24:55.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-391" for this suite. 
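
[Note on the Liveness example spec above: the two pods checked correspond to the classic liveness examples — liveness-exec removes its own health file after 30 seconds so the exec probe starts failing, and liveness-http starts returning errors from its health endpoint. A sketch of the exec variant, following the documented example shape (the image tag and exact timings are assumptions):]

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec
    spec:
      containers:
      - name: liveness
        image: busybox:1.35            # illustrative tag
        args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/healthy"]
          initialDelaySeconds: 5
          periodSeconds: 5

[Once /tmp/healthy disappears the probe fails, the kubelet kills the container, and restartCount goes from 0 to 1 — exactly the transition the log records for liveness-exec at 23:24:55.]
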
• [SLOW TEST:83.026 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":8,"skipped":925,"failed":0} Apr 22 23:24:55.548: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:24:05.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-f30e8ec1-09f0-41c2-ac18-c01e52260543 in namespace container-probe-5542 Apr 22 23:24:09.199: INFO: Started pod busybox-f30e8ec1-09f0-41c2-ac18-c01e52260543 in namespace container-probe-5542 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 23:24:09.201: INFO: Initial restart count of pod busybox-f30e8ec1-09f0-41c2-ac18-c01e52260543 is 0 Apr 22 23:24:59.301: INFO: Restart count of pod container-probe-5542/busybox-f30e8ec1-09f0-41c2-ac18-c01e52260543 is now 1 (50.100787064s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:24:59.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5542" for this suite. 
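
[Note on the exec-probe-timeout spec above: since kubelet 1.20 (hence the [MinimumKubeletVersion:1.20] tag) timeoutSeconds is actually enforced for exec probes, so a probe command that runs longer than the timeout counts as a failure. A sketch of a pod that would be restarted this way — all names and numbers are illustrative:]

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-exec-timeout-demo  # hypothetical name
    spec:
      containers:
      - name: busybox
        image: busybox:1.35
        command: ["sleep", "600"]
        livenessProbe:
          exec:
            command: ["/bin/sh", "-c", "sleep 10"]   # always exceeds the timeout below
          timeoutSeconds: 1
          initialDelaySeconds: 5
          failureThreshold: 1
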
• [SLOW TEST:54.160 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":9,"skipped":1036,"failed":0} Apr 22 23:24:59.319: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:22:03.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 Apr 22 23:22:03.920: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:22:05.923: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:22:07.925: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:22:09.925: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:22:11.925: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 Apr 22 23:23:19.957: INFO: getRestartDelay: restartCount = 3, finishedAt=2022-04-22 23:22:41 +0000 UTC restartedAt=2022-04-22 23:23:09 +0000 UTC (28s) STEP: getting restart delay-1 Apr 22 23:24:00.107: INFO: getRestartDelay: restartCount = 4, finishedAt=2022-04-22 23:23:14 +0000 UTC restartedAt=2022-04-22 23:23:57 +0000 UTC (43s) STEP: getting restart delay-2 Apr 22 23:25:36.492: INFO: getRestartDelay: restartCount = 5, finishedAt=2022-04-22 23:24:02 +0000 UTC restartedAt=2022-04-22 23:25:35 +0000 UTC (1m33s) STEP: updating the image Apr 22 23:25:37.003: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Apr 22 23:26:02.072: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-04-22 23:25:46 +0000 UTC restartedAt=2022-04-22 23:26:01 +0000 UTC (15s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:26:02.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3055" for this suite. 
• [SLOW TEST:238.196 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":2,"skipped":156,"failed":0} Apr 22 23:26:02.084: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:21:55.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-75196bbc-048c-4e1b-b1fb-2332ccd3f16d in namespace container-probe-9198 Apr 22 23:22:05.671: INFO: Started pod startup-75196bbc-048c-4e1b-b1fb-2332ccd3f16d in namespace container-probe-9198 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 23:22:05.675: INFO: Initial restart count of pod startup-75196bbc-048c-4e1b-b1fb-2332ccd3f16d is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:26:06.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9198" for this suite. 
• [SLOW TEST:250.841 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":1,"skipped":116,"failed":0} Apr 22 23:26:06.477: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:56.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-61dd671b-1624-438c-b8fc-30027c45b079 in namespace container-probe-1395 Apr 22 23:24:00.350: INFO: Started pod liveness-61dd671b-1624-438c-b8fc-30027c45b079 in namespace container-probe-1395 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 23:24:00.353: INFO: Initial restart count of pod liveness-61dd671b-1624-438c-b8fc-30027c45b079 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:28:00.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1395" for this suite. 
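
[Note on the non-local-redirect spec above: an httpGet liveness probe follows redirects to the same host, but a redirect pointing at a different host is not followed — the kubelet treats the response as a success and only emits a ProbeWarning event, so the container is not restarted. A sketch of the pattern; the image and its server behavior here are hypothetical, as the suite uses its own test image to serve the redirecting endpoint:]

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-redirect-demo     # hypothetical name
    spec:
      containers:
      - name: server
        image: redirect-server:latest  # hypothetical image that 302-redirects /healthz to another host
        livenessProbe:
          httpGet:
            path: /healthz             # answered with 302 Location: http://0.0.0.0/
            port: 8080
          initialDelaySeconds: 15
          failureThreshold: 1
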
• [SLOW TEST:244.665 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":9,"skipped":1591,"failed":0} Apr 22 23:28:00.976: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:23:14.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Apr 22 23:23:14.045: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Apr 22 23:23:15.056: INFO: node status heartbeat is unchanged for 1.003676699s, waiting for 1m20s Apr 22 23:23:16.058: INFO: node status heartbeat is unchanged for 2.006345221s, waiting for 1m20s Apr 22 23:23:17.058: INFO: node status heartbeat is unchanged for 3.006332975s, waiting for 1m20s Apr 22 23:23:18.057: INFO: node status heartbeat is unchanged for 4.00488424s, waiting for 1m20s Apr 22 23:23:19.057: INFO: node status heartbeat is unchanged for 5.004585314s, waiting for 1m20s Apr 22 23:23:20.056: INFO: node status heartbeat is unchanged for 6.004117495s, waiting for 1m20s Apr 22 23:23:21.056: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:23:21.061: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:20 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: 
s"2022-04-22 23:23:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:20 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:10 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:20 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Apr 22 23:23:22.056: INFO: node status heartbeat is unchanged for 999.400005ms, waiting for 1m20s Apr 22 23:23:23.056: INFO: node status heartbeat is unchanged for 1.999773626s, waiting for 1m20s Apr 22 23:23:24.056: INFO: node status heartbeat is unchanged for 2.999448504s, waiting for 1m20s Apr 22 23:23:25.055: INFO: node status heartbeat is unchanged for 3.999050453s, waiting for 1m20s Apr 22 23:23:26.056: INFO: node status heartbeat is unchanged for 4.999580446s, waiting for 1m20s Apr 22 23:23:27.056: INFO: node status heartbeat is unchanged for 5.999574682s, waiting for 1m20s Apr 22 23:23:28.058: INFO: node status heartbeat is unchanged for 7.001784508s, waiting for 1m20s Apr 22 23:23:29.058: INFO: node status heartbeat is unchanged for 8.001530775s, waiting for 1m20s Apr 22 23:23:30.056: INFO: node status heartbeat is unchanged for 8.999604986s, waiting for 1m20s Apr 22 23:23:31.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:23:31.062: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:30 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:30 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: 
v1.Time{Time: s"2022-04-22 23:23:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:30 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Apr 22 23:23:32.058: INFO: node status heartbeat is unchanged for 1.000327531s, waiting for 1m20s Apr 22 23:23:33.056: INFO: node status heartbeat is unchanged for 1.998989699s, waiting for 1m20s Apr 22 23:23:34.055: INFO: node status heartbeat is unchanged for 2.998106659s, waiting for 1m20s Apr 22 23:23:35.056: INFO: node status heartbeat is unchanged for 3.999075809s, waiting for 1m20s Apr 22 23:23:36.056: INFO: node status heartbeat is unchanged for 4.999187103s, waiting for 1m20s Apr 22 23:23:37.057: INFO: node status heartbeat is unchanged for 5.999620773s, waiting for 1m20s Apr 22 23:23:38.056: INFO: node status heartbeat is unchanged for 6.998549178s, waiting for 1m20s Apr 22 23:23:39.056: INFO: node status heartbeat is unchanged for 7.998708542s, waiting for 1m20s Apr 22 23:23:40.056: INFO: node status heartbeat is unchanged for 8.998693247s, waiting for 1m20s Apr 22 23:23:41.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:23:41.061: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:40 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:40 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:40 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: 
{Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    NodeInfo: {MachineID: "5e6f6d1644f942b881dbf2d9722ff85b", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "cc218e06-beff-411d-b91e-f4a272d9c83f", KernelVersion: "3.10.0-1160.62.1.el7.x86_64", ...},    Images: []v1.ContainerImage{    ... // 22 identical elements    {Names: {"k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d"..., "k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2"}, SizeBytes: 44576952},    {Names: {"localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407"..., "localhost:30500/sriov-device-plugin:v3.3.2"}, SizeBytes: 42676189}, +  { +  Names: []string{ +  "k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34d"..., +  "k8s.gcr.io/e2e-test-images/nonroot:1.1", +  }, +  SizeBytes: 42321438, +  },    {Names: {"quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72"..., "quay.io/prometheus/node-exporter:v1.0.1"}, SizeBytes: 26430341},    {Names: {"aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae"..., "aquasec/kube-bench:0.3.1"}, SizeBytes: 19301876},    ... // 9 identical elements    },    VolumesInUse: nil,    VolumesAttached: nil,    Config: nil,   } Apr 22 23:23:42.056: INFO: node status heartbeat is unchanged for 999.745378ms, waiting for 1m20s Apr 22 23:23:43.058: INFO: node status heartbeat is unchanged for 2.001613827s, waiting for 1m20s Apr 22 23:23:44.057: INFO: node status heartbeat is unchanged for 3.000345777s, waiting for 1m20s Apr 22 23:23:45.056: INFO: node status heartbeat is unchanged for 3.999505751s, waiting for 1m20s Apr 22 23:23:46.058: INFO: node status heartbeat is unchanged for 5.001333087s, waiting for 1m20s Apr 22 23:23:47.059: INFO: node status heartbeat is unchanged for 6.002151931s, waiting for 1m20s Apr 22 23:23:48.056: INFO: node status heartbeat is unchanged for 6.999290467s, waiting for 1m20s Apr 22 23:23:49.057: INFO: node status heartbeat is unchanged for 8.000791643s, waiting for 1m20s Apr 22 23:23:50.056: INFO: node status heartbeat is unchanged for 8.999469483s, waiting for 1m20s Apr 22 23:23:51.058: INFO: node status heartbeat is unchanged for 10.001575237s, waiting for 1m20s Apr 22 23:23:52.056: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Apr 22 23:23:52.060: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:51 +0000 UTC"},    LastTransitionTime: {Time: 
s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:51 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:40 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:51 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Apr 22 23:23:53.056: INFO: node status heartbeat is unchanged for 1.000410386s, waiting for 1m20s Apr 22 23:23:54.056: INFO: node status heartbeat is unchanged for 2.000347579s, waiting for 1m20s Apr 22 23:23:55.056: INFO: node status heartbeat is unchanged for 3.00056725s, waiting for 1m20s Apr 22 23:23:56.055: INFO: node status heartbeat is unchanged for 3.999456038s, waiting for 1m20s Apr 22 23:23:57.056: INFO: node status heartbeat is unchanged for 5.000828435s, waiting for 1m20s Apr 22 23:23:58.056: INFO: node status heartbeat is unchanged for 6.000364212s, waiting for 1m20s Apr 22 23:23:59.057: INFO: node status heartbeat is unchanged for 7.001321947s, waiting for 1m20s Apr 22 23:24:00.058: INFO: node status heartbeat is unchanged for 8.001952523s, waiting for 1m20s Apr 22 23:24:01.058: INFO: node status heartbeat is unchanged for 9.002665686s, waiting for 1m20s Apr 22 23:24:02.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:24:02.062: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:01 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:01 +0000 UTC"},    
LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:23:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:01 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Apr 22 23:24:03.058: INFO: node status heartbeat is unchanged for 1.000889263s, waiting for 1m20s Apr 22 23:24:04.057: INFO: node status heartbeat is unchanged for 2.000260614s, waiting for 1m20s Apr 22 23:24:05.057: INFO: node status heartbeat is unchanged for 2.999886547s, waiting for 1m20s Apr 22 23:24:06.062: INFO: node status heartbeat is unchanged for 4.005388095s, waiting for 1m20s Apr 22 23:24:07.057: INFO: node status heartbeat is unchanged for 5.000019535s, waiting for 1m20s Apr 22 23:24:08.056: INFO: node status heartbeat is unchanged for 5.998625998s, waiting for 1m20s Apr 22 23:24:09.057: INFO: node status heartbeat is unchanged for 6.999877767s, waiting for 1m20s Apr 22 23:24:10.057: INFO: node status heartbeat is unchanged for 8.000175461s, waiting for 1m20s Apr 22 23:24:11.059: INFO: node status heartbeat is unchanged for 9.002172244s, waiting for 1m20s Apr 22 23:24:12.056: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:24:12.060: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:01 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:11 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:01 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:11 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:01 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:11 +0000 
UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    NodeInfo: {MachineID: "5e6f6d1644f942b881dbf2d9722ff85b", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "cc218e06-beff-411d-b91e-f4a272d9c83f", KernelVersion: "3.10.0-1160.62.1.el7.x86_64", ...},    Images: []v1.ContainerImage{    ... // 30 identical elements    {Names: {"k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf"..., "k8s.gcr.io/e2e-test-images/nonewprivs:1.3"}, SizeBytes: 7107254},    {Names: {"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172"..., "appropriate/curl:edge"}, SizeBytes: 5654234}, +  { +  Names: []string{ +  "gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c6"..., +  "gcr.io/authenticated-image-pulling/alpine:3.7", +  }, +  SizeBytes: 4206620, +  },    {Names: {"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad"..., "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}, SizeBytes: 1154361},    {Names: {"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea"..., "busybox:1.28"}, SizeBytes: 1146369},    ... // 2 identical elements    },    VolumesInUse: nil,    VolumesAttached: nil,    Config: nil,   } Apr 22 23:24:13.057: INFO: node status heartbeat is unchanged for 1.001178048s, waiting for 1m20s Apr 22 23:24:14.055: INFO: node status heartbeat is unchanged for 1.999741539s, waiting for 1m20s Apr 22 23:24:15.058: INFO: node status heartbeat is unchanged for 3.002925448s, waiting for 1m20s Apr 22 23:24:16.058: INFO: node status heartbeat is unchanged for 4.002153919s, waiting for 1m20s Apr 22 23:24:17.057: INFO: node status heartbeat is unchanged for 5.00179288s, waiting for 1m20s Apr 22 23:24:18.056: INFO: node status heartbeat is unchanged for 6.00105483s, waiting for 1m20s Apr 22 23:24:19.058: INFO: node status heartbeat is unchanged for 7.002725514s, waiting for 1m20s Apr 22 23:24:20.056: INFO: node status heartbeat is unchanged for 8.000701442s, waiting for 1m20s Apr 22 23:24:21.058: INFO: node status heartbeat is unchanged for 9.002649647s, waiting for 1m20s Apr 22 23:24:22.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:24:22.061: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:11 +0000 UTC"}, +  LastHeartbeatTime: 
v1.Time{Time: s"2022-04-22 23:24:21 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:11 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:21 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:11 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:21 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Apr 22 23:24:23.059: INFO: node status heartbeat is unchanged for 1.001773788s, waiting for 1m20s Apr 22 23:24:24.057: INFO: node status heartbeat is unchanged for 1.99976435s, waiting for 1m20s Apr 22 23:24:25.059: INFO: node status heartbeat is unchanged for 3.00171063s, waiting for 1m20s Apr 22 23:24:26.057: INFO: node status heartbeat is unchanged for 3.99972595s, waiting for 1m20s Apr 22 23:24:27.057: INFO: node status heartbeat is unchanged for 4.999743442s, waiting for 1m20s Apr 22 23:24:28.057: INFO: node status heartbeat is unchanged for 5.99983417s, waiting for 1m20s Apr 22 23:24:29.057: INFO: node status heartbeat is unchanged for 6.999930836s, waiting for 1m20s Apr 22 23:24:30.057: INFO: node status heartbeat is unchanged for 8.000077939s, waiting for 1m20s Apr 22 23:24:31.056: INFO: node status heartbeat is unchanged for 8.99908091s, waiting for 1m20s Apr 22 23:24:32.058: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:24:32.063: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:21 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:31 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:21 +0000 UTC"}, +  
LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:31 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:21 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:31 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Apr 22 23:24:33.058: INFO: node status heartbeat is unchanged for 1.000097983s, waiting for 1m20s Apr 22 23:24:34.095: INFO: node status heartbeat is unchanged for 2.036993395s, waiting for 1m20s Apr 22 23:24:35.057: INFO: node status heartbeat is unchanged for 2.998938488s, waiting for 1m20s Apr 22 23:24:36.057: INFO: node status heartbeat is unchanged for 3.998590076s, waiting for 1m20s Apr 22 23:24:37.056: INFO: node status heartbeat is unchanged for 4.997782587s, waiting for 1m20s Apr 22 23:24:38.056: INFO: node status heartbeat is unchanged for 5.997694399s, waiting for 1m20s Apr 22 23:24:39.057: INFO: node status heartbeat is unchanged for 6.999392972s, waiting for 1m20s Apr 22 23:24:40.056: INFO: node status heartbeat is unchanged for 7.998512565s, waiting for 1m20s Apr 22 23:24:41.058: INFO: node status heartbeat is unchanged for 8.999799864s, waiting for 1m20s Apr 22 23:24:42.056: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:24:42.061: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:31 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:41 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:31 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:41 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:31 +0000 
UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:41 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Apr 22 23:24:43.057: INFO: node status heartbeat is unchanged for 1.001116343s, waiting for 1m20s Apr 22 23:24:44.056: INFO: node status heartbeat is unchanged for 1.999898499s, waiting for 1m20s Apr 22 23:24:45.059: INFO: node status heartbeat is unchanged for 3.003200725s, waiting for 1m20s Apr 22 23:24:46.055: INFO: node status heartbeat is unchanged for 3.999197265s, waiting for 1m20s Apr 22 23:24:47.056: INFO: node status heartbeat is unchanged for 4.999571612s, waiting for 1m20s Apr 22 23:24:48.056: INFO: node status heartbeat is unchanged for 5.999853731s, waiting for 1m20s Apr 22 23:24:49.057: INFO: node status heartbeat is unchanged for 7.000346142s, waiting for 1m20s Apr 22 23:24:50.058: INFO: node status heartbeat is unchanged for 8.001252323s, waiting for 1m20s Apr 22 23:24:51.058: INFO: node status heartbeat is unchanged for 9.001454459s, waiting for 1m20s Apr 22 23:24:52.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:24:52.062: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:41 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:51 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:41 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:51 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:41 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:51 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, 
Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Apr 22 23:24:53.058: INFO: node status heartbeat is unchanged for 1.001207365s, waiting for 1m20s Apr 22 23:24:54.057: INFO: node status heartbeat is unchanged for 1.999380993s, waiting for 1m20s Apr 22 23:24:55.057: INFO: node status heartbeat is unchanged for 3.000063052s, waiting for 1m20s Apr 22 23:24:56.057: INFO: node status heartbeat is unchanged for 3.999665573s, waiting for 1m20s Apr 22 23:24:57.058: INFO: node status heartbeat is unchanged for 5.000833823s, waiting for 1m20s Apr 22 23:24:58.058: INFO: node status heartbeat is unchanged for 6.000438563s, waiting for 1m20s Apr 22 23:24:59.056: INFO: node status heartbeat is unchanged for 6.999190386s, waiting for 1m20s Apr 22 23:25:00.057: INFO: node status heartbeat is unchanged for 7.99937506s, waiting for 1m20s Apr 22 23:25:01.056: INFO: node status heartbeat is unchanged for 8.998614099s, waiting for 1m20s Apr 22 23:25:02.056: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Apr 22 23:25:02.060: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:02 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:02 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:02 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
Apr 22 23:25:02.056: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s
Apr 22 23:25:02.060: INFO:   v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:51 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:02 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:51 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:02 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:24:51 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:02 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ... // 5 identical fields
}
Apr 22 23:25:03.057: INFO: node status heartbeat is unchanged for 1.000863087s, waiting for 1m20s
Apr 22 23:25:04.056: INFO: node status heartbeat is unchanged for 2.000239898s, waiting for 1m20s
Apr 22 23:25:05.059: INFO: node status heartbeat is unchanged for 3.002991926s, waiting for 1m20s
Apr 22 23:25:06.058: INFO: node status heartbeat is unchanged for 4.001882452s, waiting for 1m20s
Apr 22 23:25:07.058: INFO: node status heartbeat is unchanged for 5.002450966s, waiting for 1m20s
Apr 22 23:25:08.058: INFO: node status heartbeat is unchanged for 6.001683823s, waiting for 1m20s
Apr 22 23:25:09.057: INFO: node status heartbeat is unchanged for 7.000783135s, waiting for 1m20s
Apr 22 23:25:10.056: INFO: node status heartbeat is unchanged for 8.000112715s, waiting for 1m20s
Apr 22 23:25:11.056: INFO: node status heartbeat is unchanged for 8.999953518s, waiting for 1m20s
Apr 22 23:25:12.058: INFO: node status heartbeat is unchanged for 10.001874832s, waiting for 1m20s
Apr 22 23:25:13.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:25:13.061: INFO:   v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:02 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:12 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:02 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:12 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:02 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:12 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ... // 5 identical fields
}
Apr 22 23:25:14.057: INFO: node status heartbeat is unchanged for 1.000317553s, waiting for 1m20s
Apr 22 23:25:15.058: INFO: node status heartbeat is unchanged for 2.000870595s, waiting for 1m20s
Apr 22 23:25:16.057: INFO: node status heartbeat is unchanged for 3.000236878s, waiting for 1m20s
Apr 22 23:25:17.057: INFO: node status heartbeat is unchanged for 3.999827927s, waiting for 1m20s
Apr 22 23:25:18.056: INFO: node status heartbeat is unchanged for 4.999353156s, waiting for 1m20s
Apr 22 23:25:19.055: INFO: node status heartbeat is unchanged for 5.998487761s, waiting for 1m20s
Apr 22 23:25:20.057: INFO: node status heartbeat is unchanged for 7.000487425s, waiting for 1m20s
Apr 22 23:25:21.056: INFO: node status heartbeat is unchanged for 7.998924079s, waiting for 1m20s
Apr 22 23:25:22.055: INFO: node status heartbeat is unchanged for 8.99864848s, waiting for 1m20s
Apr 22 23:25:23.059: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:25:23.063: INFO:   v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:12 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:22 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:12 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:22 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:12 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:22 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ... // 5 identical fields
}
Apr 22 23:25:24.057: INFO: node status heartbeat is unchanged for 997.710053ms, waiting for 1m20s
Apr 22 23:25:25.058: INFO: node status heartbeat is unchanged for 1.999178239s, waiting for 1m20s
Apr 22 23:25:26.058: INFO: node status heartbeat is unchanged for 2.998707391s, waiting for 1m20s
Apr 22 23:25:27.057: INFO: node status heartbeat is unchanged for 3.998065226s, waiting for 1m20s
Apr 22 23:25:28.058: INFO: node status heartbeat is unchanged for 4.998751422s, waiting for 1m20s
Apr 22 23:25:29.056: INFO: node status heartbeat is unchanged for 5.997221881s, waiting for 1m20s
Apr 22 23:25:30.057: INFO: node status heartbeat is unchanged for 6.998441569s, waiting for 1m20s
Apr 22 23:25:31.057: INFO: node status heartbeat is unchanged for 7.998328778s, waiting for 1m20s
Apr 22 23:25:32.057: INFO: node status heartbeat is unchanged for 8.998394077s, waiting for 1m20s
Apr 22 23:25:33.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:25:33.062: INFO:   v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:22 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:32 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:22 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:32 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:22 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:32 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ... // 5 identical fields
}
Apr 22 23:25:34.056: INFO: node status heartbeat is unchanged for 999.15309ms, waiting for 1m20s
Apr 22 23:25:35.058: INFO: node status heartbeat is unchanged for 2.000517953s, waiting for 1m20s
Apr 22 23:25:36.057: INFO: node status heartbeat is unchanged for 3.000165441s, waiting for 1m20s
Apr 22 23:25:37.056: INFO: node status heartbeat is unchanged for 3.99905137s, waiting for 1m20s
Apr 22 23:25:38.059: INFO: node status heartbeat is unchanged for 5.002415829s, waiting for 1m20s
Apr 22 23:25:39.056: INFO: node status heartbeat is unchanged for 5.999217802s, waiting for 1m20s
Apr 22 23:25:40.058: INFO: node status heartbeat is unchanged for 7.001161915s, waiting for 1m20s
Apr 22 23:25:41.057: INFO: node status heartbeat is unchanged for 8.000284026s, waiting for 1m20s
Apr 22 23:25:42.059: INFO: node status heartbeat is unchanged for 9.00239181s, waiting for 1m20s
Apr 22 23:25:43.056: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:25:43.061: INFO:   v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:32 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:42 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:32 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:42 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:32 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:42 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ... // 5 identical fields
}
Apr 22 23:25:44.058: INFO: node status heartbeat is unchanged for 1.001695286s, waiting for 1m20s
Apr 22 23:25:45.058: INFO: node status heartbeat is unchanged for 2.001288499s, waiting for 1m20s
Apr 22 23:25:46.058: INFO: node status heartbeat is unchanged for 3.002081843s, waiting for 1m20s
Apr 22 23:25:47.057: INFO: node status heartbeat is unchanged for 4.000408328s, waiting for 1m20s
Apr 22 23:25:48.058: INFO: node status heartbeat is unchanged for 5.001332806s, waiting for 1m20s
Apr 22 23:25:49.056: INFO: node status heartbeat is unchanged for 5.999981527s, waiting for 1m20s
Apr 22 23:25:50.057: INFO: node status heartbeat is unchanged for 7.000445842s, waiting for 1m20s
Apr 22 23:25:51.056: INFO: node status heartbeat is unchanged for 7.999550863s, waiting for 1m20s
Apr 22 23:25:52.057: INFO: node status heartbeat is unchanged for 9.000210779s, waiting for 1m20s
Apr 22 23:25:53.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:25:53.062: INFO:   v1.NodeStatus{
  Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
  Phase: "",
  Conditions: []v1.NodeCondition{
    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
    {
      Type: "MemoryPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:42 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:52 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientMemory",
      Message: "kubelet has sufficient memory available",
    },
    {
      Type: "DiskPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:42 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:52 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasNoDiskPressure",
      Message: "kubelet has no disk pressure",
    },
    {
      Type: "PIDPressure",
      Status: "False",
-     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:42 +0000 UTC"},
+     LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:25:52 +0000 UTC"},
      LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
      Reason: "KubeletHasSufficientPID",
      Message: "kubelet has sufficient PID available",
    },
    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
  },
  Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
  DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  ... // 5 identical fields
}
Apr 22 23:25:54.056: INFO: node status heartbeat is unchanged for 998.941546ms, waiting for 1m20s
Apr 22 23:25:55.056: INFO: node status heartbeat is unchanged for 1.9990893s, waiting for 1m20s
Apr 22 23:25:56.055: INFO: node status heartbeat is unchanged for 2.997917776s, waiting for 1m20s
Apr 22 23:25:57.056: INFO: node status heartbeat is unchanged for 3.998623268s, waiting for 1m20s
Apr 22 23:25:58.057: INFO: node status heartbeat is unchanged for 4.99948755s, waiting for 1m20s
Apr 22 23:25:59.057: INFO: node status heartbeat is unchanged for 5.999280369s, waiting for 1m20s
Apr 22 23:26:00.056: INFO: node status heartbeat is unchanged for 6.998960384s, waiting for 1m20s
Apr 22 23:26:01.056: INFO: node status heartbeat is unchanged for 7.998629119s, waiting for 1m20s
Apr 22 23:26:02.055: INFO: node status heartbeat is unchanged for 8.998166919s, waiting for 1m20s
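Throughout this run only LastHeartbeatTime moves; every LastTransitionTime stays pinned at 19:58:33 (the pressure conditions) or 19:59:43 (Ready), so the node never actually changed state while being observed. A consumer that wants to react only to real transitions can mask the heartbeat field when diffing, for example with go-cmp's cmpopts (a sketch, not the e2e test's own code):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
	v1 "k8s.io/api/core/v1"
)

// materialDiff compares two node statuses while ignoring the per-heartbeat
// timestamp, so a status that only heartbeated compares as unchanged.
func materialDiff(prev, cur v1.NodeStatus) string {
	return cmp.Diff(prev, cur,
		cmpopts.IgnoreFields(v1.NodeCondition{}, "LastHeartbeatTime"))
}

func main() {
	prev := v1.NodeStatus{Conditions: []v1.NodeCondition{{
		Type: v1.NodeReady, Status: v1.ConditionTrue, Reason: "KubeletReady",
	}}}
	cur := *prev.DeepCopy() // same status, as if only the heartbeat advanced
	if d := materialDiff(prev, cur); d == "" {
		fmt.Println("heartbeat only; no material change")
	} else {
		fmt.Println(d)
	}
}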
// 5 identical fields   } Apr 22 23:26:04.056: INFO: node status heartbeat is unchanged for 1.0007094s, waiting for 1m20s Apr 22 23:26:05.058: INFO: node status heartbeat is unchanged for 2.002001136s, waiting for 1m20s Apr 22 23:26:06.057: INFO: node status heartbeat is unchanged for 3.001687496s, waiting for 1m20s Apr 22 23:26:07.062: INFO: node status heartbeat is unchanged for 4.006151031s, waiting for 1m20s Apr 22 23:26:08.058: INFO: node status heartbeat is unchanged for 5.001951843s, waiting for 1m20s Apr 22 23:26:09.058: INFO: node status heartbeat is unchanged for 6.002012548s, waiting for 1m20s Apr 22 23:26:10.057: INFO: node status heartbeat is unchanged for 7.001346174s, waiting for 1m20s Apr 22 23:26:11.062: INFO: node status heartbeat is unchanged for 8.00590554s, waiting for 1m20s Apr 22 23:26:12.058: INFO: node status heartbeat is unchanged for 9.002129271s, waiting for 1m20s Apr 22 23:26:13.058: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:26:13.063: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 22 23:26:14.058: INFO: node status heartbeat is unchanged for 999.597499ms, waiting for 1m20s Apr 22 23:26:15.057: INFO: node status heartbeat is unchanged for 1.999141483s, waiting for 1m20s Apr 22 23:26:16.059: INFO: node status heartbeat is unchanged for 3.000716163s, waiting for 1m20s Apr 22 23:26:17.058: INFO: node status heartbeat is unchanged for 3.999280669s, waiting for 1m20s Apr 22 23:26:18.059: INFO: node status heartbeat is unchanged for 5.000920036s, waiting for 1m20s Apr 22 23:26:19.057: INFO: node status heartbeat is unchanged for 5.998222702s, waiting for 1m20s Apr 22 23:26:20.056: INFO: node status heartbeat is unchanged for 6.997981449s, waiting for 1m20s Apr 22 23:26:21.059: INFO: node status heartbeat is unchanged for 8.000548108s, waiting for 1m20s Apr 22 23:26:22.058: INFO: node status heartbeat is unchanged for 9.000121774s, waiting for 1m20s Apr 22 23:26:23.056: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:26:23.061: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:22 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:22 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:22 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 22 23:26:24.057: INFO: node status heartbeat is unchanged for 1.000605864s, waiting for 1m20s Apr 22 23:26:25.058: INFO: node status heartbeat is unchanged for 2.00231787s, waiting for 1m20s Apr 22 23:26:26.057: INFO: node status heartbeat is unchanged for 3.000796162s, waiting for 1m20s Apr 22 23:26:27.056: INFO: node status heartbeat is unchanged for 4.000022954s, waiting for 1m20s Apr 22 23:26:28.060: INFO: node status heartbeat is unchanged for 5.003654139s, waiting for 1m20s Apr 22 23:26:29.057: INFO: node status heartbeat is unchanged for 6.000704548s, waiting for 1m20s Apr 22 23:26:30.057: INFO: node status heartbeat is unchanged for 7.001032947s, waiting for 1m20s Apr 22 23:26:31.057: INFO: node status heartbeat is unchanged for 8.001414583s, waiting for 1m20s Apr 22 23:26:32.058: INFO: node status heartbeat is unchanged for 9.001731612s, waiting for 1m20s Apr 22 23:26:33.058: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 22 23:26:33.063: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:22 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:32 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:22 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:32 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:22 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:32 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
Apr 22 23:26:34.057: INFO: node status heartbeat is unchanged for 998.596355ms, waiting for 1m20s
Apr 22 23:26:35.059: INFO: node status heartbeat is unchanged for 2.001016317s, waiting for 1m20s
Apr 22 23:26:36.057: INFO: node status heartbeat is unchanged for 2.999169072s, waiting for 1m20s
Apr 22 23:26:37.059: INFO: node status heartbeat is unchanged for 4.000741449s, waiting for 1m20s
Apr 22 23:26:38.059: INFO: node status heartbeat is unchanged for 5.001377661s, waiting for 1m20s
Apr 22 23:26:39.057: INFO: node status heartbeat is unchanged for 5.998768124s, waiting for 1m20s
Apr 22 23:26:40.056: INFO: node status heartbeat is unchanged for 6.998139788s, waiting for 1m20s
Apr 22 23:26:41.058: INFO: node status heartbeat is unchanged for 7.999591133s, waiting for 1m20s
Apr 22 23:26:42.059: INFO: node status heartbeat is unchanged for 9.000460242s, waiting for 1m20s
Apr 22 23:26:43.056: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:26:43.061: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:32 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:42 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:32 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:42 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:32 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:42 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Apr 22 23:26:44.056: INFO: node status heartbeat is unchanged for 999.627987ms, waiting for 1m20s
Apr 22 23:26:45.057: INFO: node status heartbeat is unchanged for 2.000859801s, waiting for 1m20s
Apr 22 23:26:46.057: INFO: node status heartbeat is unchanged for 3.000230813s, waiting for 1m20s
Apr 22 23:26:47.057: INFO: node status heartbeat is unchanged for 4.000774072s, waiting for 1m20s
Apr 22 23:26:48.058: INFO: node status heartbeat is unchanged for 5.001167601s, waiting for 1m20s
Apr 22 23:26:49.057: INFO: node status heartbeat is unchanged for 6.000972194s, waiting for 1m20s
Apr 22 23:26:50.058: INFO: node status heartbeat is unchanged for 7.00116695s, waiting for 1m20s
Apr 22 23:26:51.057: INFO: node status heartbeat is unchanged for 8.000274488s, waiting for 1m20s
Apr 22 23:26:52.058: INFO: node status heartbeat is unchanged for 9.001467894s, waiting for 1m20s
Apr 22 23:26:53.059: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:26:53.063: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:42 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:52 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:42 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:52 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:42 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:52 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Apr 22 23:26:54.057: INFO: node status heartbeat is unchanged for 998.493663ms, waiting for 1m20s
Apr 22 23:26:55.059: INFO: node status heartbeat is unchanged for 2.000763841s, waiting for 1m20s
Apr 22 23:26:56.059: INFO: node status heartbeat is unchanged for 3.000020617s, waiting for 1m20s
Apr 22 23:26:57.058: INFO: node status heartbeat is unchanged for 3.999769435s, waiting for 1m20s
Apr 22 23:26:58.058: INFO: node status heartbeat is unchanged for 4.999194174s, waiting for 1m20s
Apr 22 23:26:59.058: INFO: node status heartbeat is unchanged for 5.999715538s, waiting for 1m20s
Apr 22 23:27:00.059: INFO: node status heartbeat is unchanged for 6.999938148s, waiting for 1m20s
Apr 22 23:27:01.056: INFO: node status heartbeat is unchanged for 7.997667821s, waiting for 1m20s
Apr 22 23:27:02.057: INFO: node status heartbeat is unchanged for 8.998248505s, waiting for 1m20s
Apr 22 23:27:03.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:27:03.062: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:52 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:02 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:52 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:02 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:26:52 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:02 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Apr 22 23:27:04.057: INFO: node status heartbeat is unchanged for 999.711935ms, waiting for 1m20s
Apr 22 23:27:05.059: INFO: node status heartbeat is unchanged for 2.001770047s, waiting for 1m20s
Apr 22 23:27:06.057: INFO: node status heartbeat is unchanged for 2.999794939s, waiting for 1m20s
Apr 22 23:27:07.059: INFO: node status heartbeat is unchanged for 4.001574725s, waiting for 1m20s
Apr 22 23:27:08.058: INFO: node status heartbeat is unchanged for 5.000772799s, waiting for 1m20s
Apr 22 23:27:09.057: INFO: node status heartbeat is unchanged for 5.99950386s, waiting for 1m20s
Apr 22 23:27:10.057: INFO: node status heartbeat is unchanged for 6.999822324s, waiting for 1m20s
Apr 22 23:27:11.057: INFO: node status heartbeat is unchanged for 8.000105857s, waiting for 1m20s
Apr 22 23:27:12.057: INFO: node status heartbeat is unchanged for 8.999860715s, waiting for 1m20s
Apr 22 23:27:13.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:27:13.061: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:02 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:12 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:02 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:12 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:02 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:12 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Apr 22 23:27:14.057: INFO: node status heartbeat is unchanged for 1.000192953s, waiting for 1m20s
Apr 22 23:27:15.056: INFO: node status heartbeat is unchanged for 1.99931642s, waiting for 1m20s
Apr 22 23:27:16.057: INFO: node status heartbeat is unchanged for 3.000051717s, waiting for 1m20s
Apr 22 23:27:17.058: INFO: node status heartbeat is unchanged for 4.001388751s, waiting for 1m20s
Apr 22 23:27:18.057: INFO: node status heartbeat is unchanged for 4.999852901s, waiting for 1m20s
Apr 22 23:27:19.057: INFO: node status heartbeat is unchanged for 6.000064473s, waiting for 1m20s
Apr 22 23:27:20.059: INFO: node status heartbeat is unchanged for 7.001867175s, waiting for 1m20s
Apr 22 23:27:21.057: INFO: node status heartbeat is unchanged for 8.000221085s, waiting for 1m20s
Apr 22 23:27:22.057: INFO: node status heartbeat is unchanged for 9.000467268s, waiting for 1m20s
Apr 22 23:27:23.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:27:23.062: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:12 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:22 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:12 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:22 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:12 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:22 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
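The structured "-"/"+" diffs in these entries appear to be rendered by github.com/google/go-cmp: collapsing unchanged fields into "// N identical fields" is that library's signature output. A self-contained sketch of the same rendering, using simplified stand-in types rather than the real k8s.io/api structs:

// cmpdiff.go — illustrative sketch; the types are hypothetical stand-ins
// for the handful of v1.NodeStatus fields visible in the log above.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

type NodeCondition struct {
	Type              string
	Status            string
	LastHeartbeatTime string
	Reason            string
	Message           string
}

type NodeStatus struct {
	Phase      string
	Conditions []NodeCondition
}

func main() {
	prev := NodeStatus{Conditions: []NodeCondition{{
		Type: "MemoryPressure", Status: "False",
		LastHeartbeatTime: "2022-04-22 23:27:12 +0000 UTC",
		Reason:            "KubeletHasSufficientMemory",
		Message:           "kubelet has sufficient memory available",
	}}}
	curr := prev
	curr.Conditions = []NodeCondition{prev.Conditions[0]}
	curr.Conditions[0].LastHeartbeatTime = "2022-04-22 23:27:22 +0000 UTC"

	// Unchanged fields collapse to "// N identical fields"; changed fields
	// print as -/+ pairs, matching the shape of the diffs in this log.
	fmt.Println(cmp.Diff(prev, curr))
}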
Apr 22 23:27:24.057: INFO: node status heartbeat is unchanged for 999.771425ms, waiting for 1m20s
Apr 22 23:27:25.056: INFO: node status heartbeat is unchanged for 1.999163356s, waiting for 1m20s
Apr 22 23:27:26.056: INFO: node status heartbeat is unchanged for 2.999325092s, waiting for 1m20s
Apr 22 23:27:27.057: INFO: node status heartbeat is unchanged for 3.99984428s, waiting for 1m20s
Apr 22 23:27:28.056: INFO: node status heartbeat is unchanged for 4.999172202s, waiting for 1m20s
Apr 22 23:27:29.058: INFO: node status heartbeat is unchanged for 6.000729669s, waiting for 1m20s
Apr 22 23:27:30.059: INFO: node status heartbeat is unchanged for 7.001635328s, waiting for 1m20s
Apr 22 23:27:31.058: INFO: node status heartbeat is unchanged for 8.000648892s, waiting for 1m20s
Apr 22 23:27:32.058: INFO: node status heartbeat is unchanged for 9.001195198s, waiting for 1m20s
Apr 22 23:27:33.058: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:27:33.062: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:22 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:32 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:22 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:32 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:22 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:32 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Apr 22 23:27:34.057: INFO: node status heartbeat is unchanged for 999.69047ms, waiting for 1m20s
Apr 22 23:27:35.058: INFO: node status heartbeat is unchanged for 2.000772176s, waiting for 1m20s
Apr 22 23:27:36.059: INFO: node status heartbeat is unchanged for 3.001800534s, waiting for 1m20s
Apr 22 23:27:37.058: INFO: node status heartbeat is unchanged for 4.000621514s, waiting for 1m20s
Apr 22 23:27:38.057: INFO: node status heartbeat is unchanged for 4.999785416s, waiting for 1m20s
Apr 22 23:27:39.058: INFO: node status heartbeat is unchanged for 6.000068707s, waiting for 1m20s
Apr 22 23:27:40.057: INFO: node status heartbeat is unchanged for 6.999147717s, waiting for 1m20s
Apr 22 23:27:41.059: INFO: node status heartbeat is unchanged for 8.001467089s, waiting for 1m20s
Apr 22 23:27:42.058: INFO: node status heartbeat is unchanged for 8.999960924s, waiting for 1m20s
Apr 22 23:27:43.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:27:43.061: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:32 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:42 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:32 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:42 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:32 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:42 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Apr 22 23:27:44.057: INFO: node status heartbeat is unchanged for 999.80655ms, waiting for 1m20s
Apr 22 23:27:45.057: INFO: node status heartbeat is unchanged for 2.000461167s, waiting for 1m20s
Apr 22 23:27:46.056: INFO: node status heartbeat is unchanged for 2.999486398s, waiting for 1m20s
Apr 22 23:27:47.057: INFO: node status heartbeat is unchanged for 3.999944077s, waiting for 1m20s
Apr 22 23:27:48.056: INFO: node status heartbeat is unchanged for 4.999206003s, waiting for 1m20s
Apr 22 23:27:49.056: INFO: node status heartbeat is unchanged for 5.999548273s, waiting for 1m20s
Apr 22 23:27:50.059: INFO: node status heartbeat is unchanged for 7.002011615s, waiting for 1m20s
Apr 22 23:27:51.055: INFO: node status heartbeat is unchanged for 7.998440887s, waiting for 1m20s
Apr 22 23:27:52.057: INFO: node status heartbeat is unchanged for 8.99981834s, waiting for 1m20s
Apr 22 23:27:53.057: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:27:53.062: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:42 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:52 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:42 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:52 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:42 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:52 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Apr 22 23:27:54.057: INFO: node status heartbeat is unchanged for 999.746859ms, waiting for 1m20s
Apr 22 23:27:55.057: INFO: node status heartbeat is unchanged for 1.999251346s, waiting for 1m20s
Apr 22 23:27:56.057: INFO: node status heartbeat is unchanged for 2.999686885s, waiting for 1m20s
Apr 22 23:27:57.059: INFO: node status heartbeat is unchanged for 4.001186136s, waiting for 1m20s
Apr 22 23:27:58.055: INFO: node status heartbeat is unchanged for 4.997751179s, waiting for 1m20s
Apr 22 23:27:59.057: INFO: node status heartbeat is unchanged for 5.99941426s, waiting for 1m20s
Apr 22 23:28:00.059: INFO: node status heartbeat is unchanged for 7.001340295s, waiting for 1m20s
Apr 22 23:28:01.057: INFO: node status heartbeat is unchanged for 7.999140293s, waiting for 1m20s
Apr 22 23:28:02.058: INFO: node status heartbeat is unchanged for 9.001000414s, waiting for 1m20s
Apr 22 23:28:03.058: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:28:03.063: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:52 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:28:02 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:52 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:28:02 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:27:52 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:28:02 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Apr 22 23:28:04.056: INFO: node status heartbeat is unchanged for 997.750136ms, waiting for 1m20s
Apr 22 23:28:05.059: INFO: node status heartbeat is unchanged for 2.00067895s, waiting for 1m20s
Apr 22 23:28:06.055: INFO: node status heartbeat is unchanged for 2.99689842s, waiting for 1m20s
Apr 22 23:28:07.056: INFO: node status heartbeat is unchanged for 3.997822862s, waiting for 1m20s
Apr 22 23:28:08.062: INFO: node status heartbeat is unchanged for 5.003677424s, waiting for 1m20s
Apr 22 23:28:09.057: INFO: node status heartbeat is unchanged for 5.998868524s, waiting for 1m20s
Apr 22 23:28:10.058: INFO: node status heartbeat is unchanged for 6.999249742s, waiting for 1m20s
Apr 22 23:28:11.057: INFO: node status heartbeat is unchanged for 7.999066339s, waiting for 1m20s
Apr 22 23:28:12.059: INFO: node status heartbeat is unchanged for 9.000307923s, waiting for 1m20s
Apr 22 23:28:13.058: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 22 23:28:13.063: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-22 20:02:30 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:28:02 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:28:12 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:28:02 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:28:12 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:28:02 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-22 23:28:12 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-22 19:58:33 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-22 19:59:43 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Apr 22 23:28:14.056: INFO: node status heartbeat is unchanged for 997.750224ms, waiting for 1m20s
Apr 22 23:28:14.058: INFO: node status heartbeat is unchanged for 1.000158585s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:28:14.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-2963" for this suite.

• [SLOW TEST:300.051 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":3,"skipped":256,"failed":0}
Apr 22 23:28:14.079: INFO: Running AfterSuite actions on all nodes
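For reference, the object the NodeLease tests watch lives in the kube-node-lease namespace: one coordination.k8s.io/v1 Lease per node, which the kubelet renews roughly every 10s (the default lease duration is 40s). A minimal client-go sketch of inspecting it follows; the node name and kubeconfig path are assumptions taken from the log, and since the Spec fields are pointers a production check would guard against nil before dereferencing.

// leasecheck.go — illustrative sketch, not the e2e suite's code.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Each node's Lease is named after the node and lives in kube-node-lease.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// NOTE: HolderIdentity, LeaseDurationSeconds, and RenewTime are pointers;
	// they are dereferenced here without nil checks only for brevity.
	fmt.Printf("holder=%s duration=%ds renewed %v ago\n",
		*lease.Spec.HolderIdentity,
		*lease.Spec.LeaseDurationSeconds,
		time.Since(lease.Spec.RenewTime.Time).Round(time.Second))
}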
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:22:03.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
Apr 22 23:22:03.606: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:22:05.609: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:22:07.611: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:22:09.611: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:22:11.611: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:22:13.610: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
Apr 22 23:33:32.992: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-04-22 23:28:26 +0000 UTC restartedAt=2022-04-22 23:33:32 +0000 UTC (5m6s)
Apr 22 23:38:47.385: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-04-22 23:33:37 +0000 UTC restartedAt=2022-04-22 23:38:46 +0000 UTC (5m9s)
Apr 22 23:43:58.730: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-04-22 23:38:51 +0000 UTC restartedAt=2022-04-22 23:43:57 +0000 UTC (5m6s)
STEP: getting restart delay after a capped delay
Apr 22 23:49:06.165: INFO: getRestartDelay: restartCount = 10, finishedAt=2022-04-22 23:44:02 +0000 UTC restartedAt=2022-04-22 23:49:04 +0000 UTC (5m2s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:49:06.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5180" for this suite.

• [SLOW TEST:1622.601 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":2,"skipped":129,"failed":0}
Apr 22 23:49:06.177: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":2,"skipped":58,"failed":0}
Apr 22 23:24:17.907: INFO: Running AfterSuite actions on all nodes
Apr 22 23:49:06.249: INFO: Running AfterSuite actions on node 1
Apr 22 23:49:06.250: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5773 Specs in 1631.211 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5720 Skipped

Ginkgo ran 1 suite in 27m12.836756387s
Test Suite Failed
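The restart delays recorded by the back-off test above (5m2s to 5m9s at restartCount 7 through 10) are consistent with the kubelet's crash-loop back-off, which starts around 10s, doubles after each restart, and is capped at MaxContainerBackOff (5m); the extra seconds in each measurement are pod-startup overhead. A sketch of that schedule, assuming the kubelet's default constants:

// backoffcap.go — illustrative sketch of the back-off schedule the test
// exercises, assuming the kubelet defaults of a 10s base and a 5m cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 10 * time.Second  // assumed initial crash-loop delay
		maxDelay = 300 * time.Second // assumed MaxContainerBackOff cap
	)
	delay := base
	for restart := 1; restart <= 10; restart++ {
		fmt.Printf("restart %2d: wait %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // capped: every later restart waits 5m
		}
	}
}

Under these assumptions the wait reaches the 5m cap by about the sixth restart and stays there, which matches the roughly five-minute gaps the test observed from restartCount 7 onward.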