I0325 09:57:35.129969 7 e2e.go:129] Starting e2e run "a6d48a68-55bb-47f8-bdb9-4da5a89878c5" on Ginkgo node 1
{"msg":"Test Suite starting","total":14,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616666253 - Will randomize all specs
Will run 14 of 5737 specs

Mar 25 09:57:35.155: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 09:57:35.158: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 25 09:57:35.237: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 25 09:57:35.472: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 25 09:57:35.472: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 25 09:57:35.472: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 25 09:57:35.571: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 25 09:57:35.571: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 25 09:57:35.571: INFO: e2e test version: v1.21.0-beta.1
Mar 25 09:57:35.572: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 25 09:57:35.572: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 09:57:35.707: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 09:57:35.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
Mar 25 09:57:36.051: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token:
Mar 25 09:57:36.133: INFO: Waiting up to 5m0s for pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a" in namespace "svcaccounts-5232" to be "Succeeded or Failed"
Mar 25 09:57:36.178: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 45.475664ms
Mar 25 09:57:38.844: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711124726s
Mar 25 09:57:41.144: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.011561558s
Mar 25 09:57:43.337: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.204742952s
Mar 25 09:57:45.389: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Running", Reason="", readiness=true. Elapsed: 9.256859349s
Mar 25 09:57:47.844: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.71169711s
STEP: Saw pod success
Mar 25 09:57:47.845: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a" satisfied condition "Succeeded or Failed"
Mar 25 09:57:48.060: INFO: Trying to get logs from node latest-worker pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a container agnhost-container:
STEP: delete the pod
Mar 25 09:57:48.556: INFO: Waiting for pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a to disappear
Mar 25 09:57:48.633: INFO: Pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a no longer exists
STEP: Creating a pod to test service account token:
Mar 25 09:57:48.870: INFO: Waiting up to 5m0s for pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a" in namespace "svcaccounts-5232" to be "Succeeded or Failed"
Mar 25 09:57:49.559: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 688.454381ms
Mar 25 09:57:51.570: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.699247704s
Mar 25 09:57:54.560: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.689667837s
Mar 25 09:57:59.963: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.09273894s
Mar 25 09:58:02.648: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.777839144s
Mar 25 09:58:04.881: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.010724634s
Mar 25 09:58:07.690: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Running", Reason="", readiness=true. Elapsed: 18.819386587s
Mar 25 09:58:10.029: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.158418651s
STEP: Saw pod success
Mar 25 09:58:10.029: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a" satisfied condition "Succeeded or Failed"
Mar 25 09:58:10.352: INFO: Trying to get logs from node latest-worker pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a container agnhost-container:
STEP: delete the pod
Mar 25 09:58:11.376: INFO: Waiting for pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a to disappear
Mar 25 09:58:11.412: INFO: Pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a no longer exists
STEP: Creating a pod to test service account token:
Mar 25 09:58:11.422: INFO: Waiting up to 5m0s for pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a" in namespace "svcaccounts-5232" to be "Succeeded or Failed"
Mar 25 09:58:11.675: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 253.397641ms
Mar 25 09:58:14.289: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.867437117s
Mar 25 09:58:17.194: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.771483569s
Mar 25 09:58:19.241: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.819145202s
Mar 25 09:58:22.257: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.834767896s
Mar 25 09:58:24.264: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Running", Reason="", readiness=true. Elapsed: 12.841886651s
Mar 25 09:58:26.307: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.884745296s
STEP: Saw pod success
Mar 25 09:58:26.307: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a" satisfied condition "Succeeded or Failed"
Mar 25 09:58:26.661: INFO: Trying to get logs from node latest-worker pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a container agnhost-container:
STEP: delete the pod
Mar 25 09:58:26.946: INFO: Waiting for pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a to disappear
Mar 25 09:58:27.103: INFO: Pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a no longer exists
STEP: Creating a pod to test service account token:
Mar 25 09:58:27.166: INFO: Waiting up to 5m0s for pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a" in namespace "svcaccounts-5232" to be "Succeeded or Failed"
Mar 25 09:58:27.718: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 551.386467ms
Mar 25 09:58:29.777: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.610934104s
Mar 25 09:58:31.889: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722547982s
Mar 25 09:58:33.893: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Running", Reason="", readiness=true. Elapsed: 6.726327126s
Mar 25 09:58:35.941: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.774223761s
STEP: Saw pod success
Mar 25 09:58:35.941: INFO: Pod "test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a" satisfied condition "Succeeded or Failed"
Mar 25 09:58:35.943: INFO: Trying to get logs from node latest-worker2 pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a container agnhost-container:
STEP: delete the pod
Mar 25 09:58:36.267: INFO: Waiting for pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a to disappear
Mar 25 09:58:36.311: INFO: Pod test-pod-8bb2fdc5-660b-4f9d-9f14-36cabc47363a no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 09:58:36.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5232" for this suite.

• [SLOW TEST:60.612 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":14,"completed":1,"skipped":359,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
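For readers following the FSGroup spec that just passed: each of the four pod runs above creates a short-lived pod whose pod-level security context sets RunAsUser and/or FSGroup, then inspects the ownership and mode of the projected service-account token file inside the container. Below is a minimal sketch of that kind of pod, built with the k8s.io/api Go types; the UID/GID values and the shell command are illustrative assumptions, not the exact values the e2e test uses.

```go
package main

// Hedged sketch of the kind of pod the FSGroup/RunAsUser e2e spec creates.
// Assumes k8s.io/api and k8s.io/apimachinery are available as module deps.

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64p(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "test-pod-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// RunAsUser/FSGroup drive the ownership and permission checks
			// on the projected service-account token file. UID 1000 and
			// GID 2000 are placeholder values for illustration.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64p(1000),
				FSGroup:   int64p(2000),
			},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				// Print owner/group/mode of the projected token so the
				// test harness can assert on them from the pod logs.
				Command: []string{"sh", "-c",
					"ls -ln /var/run/secrets/kubernetes.io/serviceaccount/token"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```

The kubelet applies FSGroup as the group owner of the projected token file, which is broadly what the pod runs above verify under different RunAsUser/FSGroup combinations.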
------------------------------
[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:584
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 09:58:36.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support InClusterConfig with token rotation [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:584
Mar 25 09:58:36.817: INFO: created pod
Mar 25 09:58:36.817: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [inclusterclient]
Mar 25 09:58:36.817: INFO: Waiting up to 1m0s for pod "inclusterclient" in namespace "svcaccounts-7423" to be "running and ready"
Mar 25 09:58:36.863: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 45.962322ms
Mar 25 09:58:38.872: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055047617s
Mar 25 09:58:40.875: INFO: Pod "inclusterclient": Phase="Running", Reason="", readiness=true. Elapsed: 4.057486273s
Mar 25 09:58:40.875: INFO: Pod "inclusterclient" satisfied condition "running and ready"
Mar 25 09:58:40.875: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [inclusterclient]
Mar 25 09:58:40.875: INFO: pod is ready
Mar 25 09:59:40.875: INFO: polling logs
Mar 25 09:59:40.985: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2
Mar 25 10:00:40.875: INFO: polling logs
Mar 25 10:00:40.880: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2
Mar 25 10:01:40.875: INFO: polling logs
Mar 25 10:01:41.534: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2
Mar 25 10:02:40.876: INFO: polling logs
Mar 25 10:02:40.980: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2
Mar 25 10:03:40.875: INFO: polling logs
Mar 25 10:03:41.238: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2
Mar 25 10:04:40.875: INFO: polling logs
Mar 25 10:04:41.017: FAIL: Unexpected error: inclusterclient reported an error: saw status=failed
I0325 09:58:40.175411 1 main.go:61] started
I0325 09:59:10.176725 1 main.go:80] calling /healthz
I0325 09:59:10.177241 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE
I0325 09:59:40.176736 1 main.go:80] calling /healthz
I0325 09:59:40.176997 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE
I0325 10:00:10.176708 1 main.go:80] calling /healthz
I0325 10:00:10.176995 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE
I0325 10:00:40.176738 1 main.go:80] calling /healthz
I0325 10:00:40.177629 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE
I0325 10:01:10.176753 1 main.go:80] calling /healthz
I0325 10:01:10.177208 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE
I0325 10:01:40.176715 1 main.go:80] calling /healthz
I0325 10:01:40.177120 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE
I0325 10:02:10.176749 1 main.go:80] calling /healthz
I0325 10:02:10.177119 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE
I0325 10:02:40.176716 1 main.go:80] calling /healthz
I0325 10:02:40.176958 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE
I0325 10:03:10.176723 1 main.go:80] calling /healthz
I0325 10:03:10.177059 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE
I0325 10:03:40.176725 1 main.go:80] calling /healthz
I0325 10:03:40.176995 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE
I0325 10:04:10.176729 1 main.go:80] calling /healthz
I0325 10:04:10.177167 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE
E0325 10:04:12.185501 1 main.go:83] status=failed
E0325 10:04:12.185535 1 main.go:84] error checking /healthz: an error on the server ("[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check failed") has prevented the request from succeeding
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0325 10:04:40.176725 1 main.go:80] calling /healthz
I0325 10:04:40.177071 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0013a1200)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0013a1200)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0013a1200, 0x6d60740)
    /usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "svcaccounts-7423".
STEP: Found 4 events.
Mar 25 10:04:41.195: INFO: At 2021-03-25 09:58:36 +0000 UTC - event for inclusterclient: {default-scheduler } Scheduled: Successfully assigned svcaccounts-7423/inclusterclient to latest-worker2
Mar 25 10:04:41.195: INFO: At 2021-03-25 09:58:38 +0000 UTC - event for inclusterclient: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 10:04:41.195: INFO: At 2021-03-25 09:58:39 +0000 UTC - event for inclusterclient: {kubelet latest-worker2} Created: Created container inclusterclient
Mar 25 10:04:41.195: INFO: At 2021-03-25 09:58:40 +0000 UTC - event for inclusterclient: {kubelet latest-worker2} Started: Started container inclusterclient
Mar 25 10:04:41.209: INFO: POD              NODE            PHASE    GRACE  CONDITIONS
Mar 25 10:04:41.209: INFO: inclusterclient  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 09:58:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 09:58:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 09:58:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 09:58:36 +0000 UTC }]
Mar 25 10:04:41.209: INFO:
Mar 25 10:04:41.213: INFO: Logging node info for node latest-control-plane
Mar 25 10:04:41.215: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1056889 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:03:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:03:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:03:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:03:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:04:41.216: INFO: Logging kubelet events for node latest-control-plane Mar 25 10:04:41.218: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 10:04:41.235: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.235: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 10:04:41.235: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.235: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 10:04:41.235: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.235: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 10:04:41.235: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.235: INFO: Container etcd ready: true, restart count 0 Mar 25 10:04:41.235: INFO: 
kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.235: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 10:04:41.235: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.235: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:04:41.235: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.235: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:04:41.235: INFO: coredns-74ff55c5b-smtp9 started at 2021-03-24 19:40:48 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.235: INFO: Container coredns ready: true, restart count 0 Mar 25 10:04:41.235: INFO: coredns-74ff55c5b-rfzq5 started at 2021-03-24 19:40:49 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.235: INFO: Container coredns ready: true, restart count 0 W0325 10:04:41.241807 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:04:41.323: INFO: Latency metrics for node latest-control-plane Mar 25 10:04:41.323: INFO: Logging node info for node latest-worker Mar 25 10:04:41.590: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1056346 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:01:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:01:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:01:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:01:58 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:04:41.590: INFO: Logging kubelet events for node latest-worker Mar 25 10:04:41.594: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 10:04:41.608: INFO: externalname-service-xxftp started at 2021-03-25 10:04:05 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.608: INFO: Container externalname-service ready: true, restart count 0 Mar 25 10:04:41.608: INFO: rally-23ef9705-vk70qya1-wqkd8 started at 2021-03-25 10:04:33 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.608: INFO: Container rally-23ef9705-vk70qya1 ready: true, restart count 0 Mar 25 10:04:41.608: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.608: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:04:41.608: INFO: affinity-nodeport-mr2fc started at 2021-03-25 10:02:24 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.608: INFO: Container affinity-nodeport ready: true, restart count 0 Mar 25 10:04:41.608: INFO: affinity-nodeport-9ztqj started at 2021-03-25 10:02:24 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.608: INFO: Container affinity-nodeport ready: true, restart count 0 Mar 25 10:04:41.608: INFO: execpodcl9zh started at 2021-03-25 10:04:25 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.608: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 10:04:41.608: INFO: kindnet-vjg9p started at 2021-03-24 19:53:47 +0000 UTC (0+1 container statuses recorded) Mar 25 10:04:41.608: INFO: Container kindnet-cni ready: true, restart count 0 W0325 10:04:41.614096 7 metrics_grabber.go:105] Did not receive an 
external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:04:41.843: INFO: Latency metrics for node latest-worker Mar 25 10:04:41.843: INFO: Logging node info for node latest-worker2 Mar 25 10:04:41.847: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1056450 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:00:17 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:02:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:02:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:02:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:02:08 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 10:04:41.848: INFO: Logging kubelet events for node latest-worker2
Mar 25 10:04:41.851: INFO: Logging pods the kubelet thinks is on node latest-worker2
Mar 25 10:04:41.859: INFO: rand-non-local-vs7rv started at 2021-03-25 09:56:22 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:04:41.859: INFO: Container c ready: false, restart count 0
Mar 25 10:04:41.859: INFO: ss2-0 started at 2021-03-25 10:02:11 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:04:41.859: INFO: Container webserver ready: false, restart count 0
Mar 25 10:04:41.859: INFO: execpod-affinitydr656 started at 2021-03-25 10:02:39 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:04:41.859: INFO: Container agnhost-container ready: true, restart count 0
Mar 25 10:04:41.859: INFO: externalname-service-qxkj2 started at 2021-03-25 10:04:05 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:04:41.859: INFO: Container externalname-service ready: true, restart count 0
Mar 25 10:04:41.859: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:04:41.859: INFO: Container kindnet-cni ready: true, restart count 0
Mar 25 10:04:41.859: INFO: inclusterclient started at 2021-03-25 09:58:36 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:04:41.859: INFO: Container inclusterclient ready: true, restart count 0
Mar 25 10:04:41.859: INFO: affinity-nodeport-9dt26 started at 2021-03-25 10:02:24 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:04:41.859: INFO: Container affinity-nodeport ready: true, restart count 0
Mar 25 10:04:41.859: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:04:41.859: INFO: Container volume-tester ready: false, restart count 0
Mar 25 10:04:41.859: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:04:41.860: INFO: Container kube-proxy ready: true, restart count 0
W0325 10:04:41.866404 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 10:04:41.977: INFO: Latency metrics for node latest-worker2
Mar 25 10:04:41.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7423" for this suite.
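Context for the failure summary that follows: the inclusterclient pod polls /healthz every 30 seconds with its projected service-account token and logs a hash of it (the authz_header lines), and the test counts unique values in those logs, expecting at least two as the kubelet rotates the token. This run saw only one before a healthz failure aborted it. A self-contained sketch of the same rotation check, assuming only the standard projected token path, might look like:

```go
package main

// Illustrative sketch (not the e2e client itself, which lives in the
// agnhost image's main.go quoted in the log above): periodically
// fingerprint the projected service-account token; with rotation enabled
// the fingerprint should change well before the token expires.

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"os"
	"time"
)

const tokenPath = "/var/run/secrets/kubernetes.io/serviceaccount/token"

// fingerprint hashes the current token so we can compare without logging it.
func fingerprint() (string, error) {
	b, err := os.ReadFile(tokenPath)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(b)
	return base64.RawURLEncoding.EncodeToString(sum[:]), nil
}

func main() {
	seen := map[string]bool{}
	for range time.Tick(30 * time.Second) {
		fp, err := fingerprint()
		if err != nil {
			fmt.Fprintln(os.Stderr, "error reading token:", err)
			continue
		}
		if !seen[fp] {
			seen[fp] = true
			fmt.Printf("new token fingerprint: %s (unique so far: %d)\n", fp, len(seen))
		}
	}
}
```

Run inside a pod, two distinct fingerprints demonstrate the rotation the test is waiting for; rest.InClusterConfig-based clients pick up the new token automatically because they re-read it from disk.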
• Failure [365.663 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support InClusterConfig with token rotation [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:584 Mar 25 10:04:41.017: Unexpected error: inclusterclient reported an error: saw status=failed I0325 09:58:40.175411 1 main.go:61] started I0325 09:59:10.176725 1 main.go:80] calling /healthz I0325 09:59:10.177241 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE I0325 09:59:40.176736 1 main.go:80] calling /healthz I0325 09:59:40.176997 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE I0325 10:00:10.176708 1 main.go:80] calling /healthz I0325 10:00:10.176995 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE I0325 10:00:40.176738 1 main.go:80] calling /healthz I0325 10:00:40.177629 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE I0325 10:01:10.176753 1 main.go:80] calling /healthz I0325 10:01:10.177208 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE I0325 10:01:40.176715 1 main.go:80] calling /healthz I0325 10:01:40.177120 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE I0325 10:02:10.176749 1 main.go:80] calling /healthz I0325 10:02:10.177119 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE I0325 10:02:40.176716 1 main.go:80] calling /healthz I0325 10:02:40.176958 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE I0325 10:03:10.176723 1 main.go:80] calling /healthz I0325 10:03:10.177059 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE I0325 10:03:40.176725 1 main.go:80] calling /healthz I0325 10:03:40.176995 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE I0325 10:04:10.176729 1 main.go:80] calling /healthz I0325 10:04:10.177167 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE E0325 10:04:12.185501 1 main.go:83] status=failed E0325 10:04:12.185535 1 main.go:84] error checking /healthz: an error on the server ("[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check failed") has prevented the request from succeeding [+]ping ok [+]log ok [-]etcd failed: reason withheld [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok 
[+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed I0325 10:04:40.176725 1 main.go:80] calling /healthz I0325 10:04:40.177071 1 main.go:97] authz_header=zjCz48fYyARUPIob26aFT7cwLSDXsHWVrds1DoZIvQE /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]","total":14,"completed":1,"skipped":452,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:84 [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:04:41.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-authz STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:50 STEP: Creating a kubernetes client that impersonates a node Mar 25 10:04:42.980: INFO: >>> kubeConfig: /root/.kube/config [It] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:84 [AfterEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:04:42.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-authz-103" for this suite. 
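The spec above asserts a deliberate property of the node authorizer: when a node-impersonating client asks for a configmap that no pod bound to it references, the API server answers 403 Forbidden rather than 404 NotFound, so a node cannot probe for the existence of objects it is not entitled to read. A sketch of that assertion, assuming a clientset nodeClient that impersonates system:node:<name> (a construction sketch follows the non-existent-secret spec below); names are illustrative:

package sketch

import (
	"context"
	"log"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// expectForbiddenNotNotFound gets a configmap the node has no claim to and
// checks which error class the node authorizer returns.
func expectForbiddenNotNotFound(nodeClient kubernetes.Interface, ns string) {
	_, err := nodeClient.CoreV1().ConfigMaps(ns).
		Get(context.TODO(), "unwanted-configmap", metav1.GetOptions{})
	switch {
	case apierrors.IsForbidden(err):
		log.Println("ok: Forbidden, existence not leaked")
	case apierrors.IsNotFound(err):
		log.Fatal("bad: NotFound leaks whether the object exists")
	default:
		log.Fatalf("unexpected result: %v", err)
	}
}

The sibling specs later in this run (non-existent secret, existing configmap, existing secret) assert the same Forbidden error class with the object kind or a pre-created object swapped in, so the pattern above covers all four.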
•{"msg":"PASSED [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error","total":14,"completed":2,"skipped":593,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authn.go:59 [BeforeEach] [sig-auth] [Feature:NodeAuthenticator] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:04:43.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-authn STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-auth] [Feature:NodeAuthenticator] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authn.go:39 [It] The kubelet's main port 10250 should reject requests with no credentials /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authn.go:59 Mar 25 10:04:43.332: INFO: The status of Pod agnhost-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:04:45.374: INFO: The status of Pod agnhost-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:04:47.343: INFO: The status of Pod agnhost-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:04:49.609: INFO: The status of Pod agnhost-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:04:51.512: INFO: The status of Pod agnhost-pod is Running (Ready = true) Mar 25 10:04:51.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=node-authn-1054 exec agnhost-pod -- /bin/sh -x -c curl -sIk -o /dev/null -w '%{http_code}' https://172.18.0.16:10250/metrics' Mar 25 10:04:59.969: INFO: stderr: "+ curl -sIk -o /dev/null -w '%{http_code}' https://172.18.0.16:10250/metrics\n" Mar 25 10:04:59.969: INFO: stdout: "401" Mar 25 10:04:59.969: INFO: stdout: 401 [AfterEach] [sig-auth] [Feature:NodeAuthenticator] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:04:59.969: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-authn-1054" for this suite. • [SLOW TEST:16.976 seconds] [sig-auth] [Feature:NodeAuthenticator] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 The kubelet's main port 10250 should reject requests with no credentials /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authn.go:59 ------------------------------ {"msg":"PASSED [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials","total":14,"completed":3,"skipped":1639,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:187 [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:04:59.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-authz STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:50 STEP: Creating a kubernetes client that impersonates a node Mar 25 10:05:00.243: INFO: >>> kubeConfig: /root/.kube/config [It] A node shouldn't be able to delete another node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:187 STEP: Create node foo by user: system:node:latest-control-plane [AfterEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:05:00.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-authz-9019" for this suite. 
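The port-10250 spec that passed above (at 10:04:59) drives its probe from a pod via kubectl exec and curl -sIk, because the test harness may not have a route to node IPs while cluster pods do. The same anonymous check in plain Go, under that routing assumption and with the node IP taken from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// The kubelet serves a self-signed certificate by default, so skip
	// verification for this probe, exactly as curl -k does in the log.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://172.18.0.16:10250/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// No credentials attached: the kubelet must reject the request,
	// which is the 401 the spec asserts on.
	fmt.Println("status:", resp.StatusCode)
}

The later delegated-token spec in this run gets 403 instead of 401: its request authenticates with a bearer token, so it passes the authentication gate, but the default service account is not authorized to read /metrics.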
•{"msg":"PASSED [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node","total":14,"completed":4,"skipped":2538,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:74 [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:05:00.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-authz STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:50 STEP: Creating a kubernetes client that impersonates a node Mar 25 10:05:01.965: INFO: >>> kubeConfig: /root/.kube/config [It] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:74 [AfterEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:05:01.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-authz-6939" for this suite. 
•{"msg":"PASSED [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error","total":14,"completed":5,"skipped":2798,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:89 [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:05:01.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-authz STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:50 STEP: Creating a kubernetes client that impersonates a node Mar 25 10:05:02.718: INFO: >>> kubeConfig: /root/.kube/config [It] Getting an existing configmap should exit with the Forbidden error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:89 STEP: Create a configmap for testing [AfterEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:05:02.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-authz-87" for this suite. 
•{"msg":"PASSED [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error","total":14,"completed":6,"skipped":3202,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:55 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:05:02.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support building a client with a CSR /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:55 Mar 25 10:05:03.248: INFO: creating CSR Mar 25 10:05:03.272: INFO: approving CSR Mar 25 10:05:08.555: INFO: waiting for CSR to be signed Mar 25 10:05:13.602: INFO: testing the client Mar 25 10:05:13.602: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:05:13.603: INFO: creating CSR as new client [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:05:16.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-4156" for this suite. 
• [SLOW TEST:13.692 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support building a client with a CSR /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:55 ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":14,"completed":7,"skipped":3388,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should ensure a single API token exists /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:05:16.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure a single API token exists /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52 STEP: waiting for a single token reference Mar 25 10:05:18.941: INFO: default service account has a single secret reference STEP: ensuring the single token reference persists STEP: deleting the service account token STEP: waiting for a new token reference Mar 25 10:05:21.452: INFO: default service account has a new single secret reference STEP: ensuring the single token reference persists STEP: deleting the reference to the service account token STEP: waiting for a new token to be created and added Mar 25 10:05:24.315: INFO: default service account has a new single secret reference STEP: ensuring the single token reference persists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:05:26.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7675" for this suite. 
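The single-token spec above relies on the legacy token controller behavior of this v1.21 cluster: each ServiceAccount carries exactly one reference to an auto-generated token Secret in .secrets, and deleting that Secret (or dropping the reference) makes the controller mint a replacement, which is what the "new single secret reference" steps observe. (Auto-generated token Secrets were removed in Kubernetes 1.24, so this check no longer applies there.) The waiting step might look like this sketch:

package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForSingleTokenRef polls the default service account until it holds
// exactly one token-secret reference.
func waitForSingleTokenRef(cs kubernetes.Interface, ns string) error {
	return wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
		sa, err := cs.CoreV1().ServiceAccounts(ns).
			Get(context.TODO(), "default", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return len(sa.Secrets) == 1, nil
	})
}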
• [SLOW TEST:10.062 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should ensure a single API token exists /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":14,"completed":8,"skipped":3455,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:79 [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:05:26.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-authz STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:50 STEP: Creating a kubernetes client that impersonates a node Mar 25 10:05:26.788: INFO: >>> kubeConfig: /root/.kube/config [It] Getting an existing secret should exit with the Forbidden error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:79 [AfterEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:05:26.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-authz-9860" for this suite. 
•{"msg":"PASSED [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error","total":14,"completed":9,"skipped":3615,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:167 [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:05:26.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-authz STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:50 STEP: Creating a kubernetes client that impersonates a node Mar 25 10:05:27.797: INFO: >>> kubeConfig: /root/.kube/config [It] A node shouldn't be able to create another node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:167 STEP: Create node foo by user: system:node:latest-control-plane [AfterEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:05:28.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-authz-8146" for this suite. 
•{"msg":"PASSED [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node","total":14,"completed":10,"skipped":3824,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authn.go:68 [BeforeEach] [sig-auth] [Feature:NodeAuthenticator] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:05:28.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-authn STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-auth] [Feature:NodeAuthenticator] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authn.go:39 [It] The kubelet can delegate ServiceAccount tokens to the API server /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authn.go:68 STEP: create a new ServiceAccount for authentication Mar 25 10:05:30.690: INFO: The status of Pod agnhost-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:05:33.034: INFO: The status of Pod agnhost-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:05:34.704: INFO: The status of Pod agnhost-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:05:37.010: INFO: The status of Pod agnhost-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:05:38.975: INFO: The status of Pod agnhost-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:05:40.828: INFO: The status of Pod agnhost-pod is Running (Ready = true) Mar 25 10:05:40.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=node-authn-2757 exec agnhost-pod -- /bin/sh -x -c curl -sIk -o /dev/null -w '%{http_code}' --header "Authorization: Bearer `cat /var/run/secrets/kubernetes.io/serviceaccount/token`" https://172.18.0.16:10250/metrics' Mar 25 10:05:41.341: INFO: stderr: "+ cat /var/run/secrets/kubernetes.io/serviceaccount/token\n+ curl -sIk -o /dev/null -w '%{http_code}' --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlVWcUdSSDZyeExRT2tOS2lIVm9YVVZnMllhQVV1cURVa041YUpGSWk3bm8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJub2RlLWF1dGhuLTI3NTciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi1iZDVucCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzI3ZmVhMGMtNDkyOS00OTFjLWE3N2YtNDNjMWFmYjljYjBiIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om5vZGUtYXV0aG4tMjc1NzpkZWZhdWx0In0.n_UteBWD9R6__zTef3H_t6JsoU8XHUwxLCeY5R9kI26XvfqfaYmyX701aFgcR0TDOTPs0xT_Ztp1brdkL2iP7eEYr9UI7bz2h6sMu1jmFOohKgPSu2Oxb4mxP2_fTWs0aeDqPbns5k9pYr0sG2iXnNvbK-sGuwAUUBpuGnmNtcipc7EYPpjre4xfNXYhI_P50F6u9uF1HCgjM8AXo4rCXv9NXgmyb5HuGlGQ0JkyR9T1UbNX6LGvhVLSJy83BQU_6qT89rDahQudrOgZnMWtDZBAZhpxkZAcYXJwJfSg_N6bIX3EE-hmAPqGyId4m9wCG7xoGifkWl72tKv0cngNMg' 
https://172.18.0.16:10250/metrics\n" Mar 25 10:05:41.341: INFO: stdout: "403" Mar 25 10:05:41.341: INFO: stdout: 403 [AfterEach] [sig-auth] [Feature:NodeAuthenticator] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:05:41.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-authn-2757" for this suite. • [SLOW TEST:13.287 seconds] [sig-auth] [Feature:NodeAuthenticator] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 The kubelet can delegate ServiceAccount tokens to the API server /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authn.go:68 ------------------------------ {"msg":"PASSED [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server","total":14,"completed":11,"skipped":3949,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:106 [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:05:41.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-authz STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:50 STEP: Creating a kubernetes client that impersonates a node Mar 25 10:05:43.732: INFO: >>> kubeConfig: /root/.kube/config [It] Getting a secret for a workload the node has access to should succeed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:106 STEP: Create a secret for testing STEP: Node should not get the secret STEP: Create a pod that use the secret STEP: The node should able to access the secret [AfterEach] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:05:47.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-authz-2568" for this suite. 
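The workload-secret spec above shows the positive side of the same policy: the node is denied a secret until a pod scheduled on that node references it, at which point the node authorizer's access graph gains a pod-to-secret edge and the read succeeds (the "The node should able to access the secret" step). In sketch form, with illustrative names and the pause image from the node's image list earlier:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func secretGatedByWorkload(admin, nodeClient kubernetes.Interface, ns, nodeName string) error {
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "node-auth-secret"},
		StringData: map[string]string{"k": "v"},
	}
	if _, err := admin.CoreV1().Secrets(ns).
		Create(context.TODO(), sec, metav1.CreateOptions{}); err != nil {
		return err
	}
	// No pod on nodeName references the secret yet: must be Forbidden.
	if _, err := nodeClient.CoreV1().Secrets(ns).
		Get(context.TODO(), sec.Name, metav1.GetOptions{}); !apierrors.IsForbidden(err) {
		return fmt.Errorf("expected Forbidden before the pod exists, got %v", err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "uses-secret"},
		Spec: corev1.PodSpec{
			NodeName:   nodeName, // bind straight onto the node under test
			Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.2"}},
			Volumes: []corev1.Volume{{Name: "s", VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: sec.Name},
			}}},
		},
	}
	if _, err := admin.CoreV1().Pods(ns).
		Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Now the same read should succeed, possibly after a brief retry
	// while the authorizer's graph catches up.
	_, err := nodeClient.CoreV1().Secrets(ns).Get(context.TODO(), sec.Name, metav1.GetOptions{})
	return err
}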
• [SLOW TEST:8.204 seconds] [sig-auth] [Feature:NodeAuthorizer] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 Getting a secret for a workload the node has access to should succeed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/node_authz.go:106 ------------------------------ {"msg":"PASSED [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed","total":14,"completed":12,"skipped":4198,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/metadata_concealment.go:34 [BeforeEach] [sig-auth] Metadata Concealment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:05:49.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename metadata-concealment STEP: Waiting for a default service account to be provisioned in namespace [It] should run a check-metadata-concealment job to completion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/metadata_concealment.go:34 Mar 25 10:05:51.802: INFO: Only supported for providers [gce] (not local) [AfterEach] [sig-auth] Metadata Concealment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:05:51.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "metadata-concealment-9496" for this suite. 
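The skip that follows is the e2e framework's provider gating: metadata concealment only exists on GCE, and this run's provider is local (kind). In-tree this looks roughly like the following pattern, assuming the v1.21-era skipper package; the surrounding spec body is elided:

package auth

import (
	"github.com/onsi/ginkgo"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

var _ = ginkgo.It("should run a check-metadata-concealment job to completion", func() {
	// Emits "Only supported for providers [gce] (not local)" and marks
	// the spec as skipped when the configured provider does not match.
	e2eskipper.SkipUnlessProviderIs("gce")
	// ... job creation and completion wait elided ...
})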
S [SKIPPING] [3.430 seconds] [sig-auth] Metadata Concealment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should run a check-metadata-concealment job to completion [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/metadata_concealment.go:34 Only supported for providers [gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/metadata_concealment.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMar 25 10:05:53.130: INFO: Running AfterSuite actions on all nodes Mar 25 10:05:53.130: INFO: Running AfterSuite actions on node 1 Mar 25 10:05:53.130: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_auth/junit_01.xml {"msg":"Test Suite completed","total":14,"completed":12,"skipped":5724,"failed":1,"failures":["[sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"]} Summarizing 1 Failure: [Fail] [sig-auth] ServiceAccounts [It] should support InClusterConfig with token rotation [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 Ran 13 of 5737 Specs in 497.977 seconds FAIL! -- 12 Passed | 1 Failed | 0 Pending | 5724 Skipped --- FAIL: TestE2E (498.03s) FAIL